{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:56.577148Z" }, "title": "Mr. TYDI: A Multi-lingual Benchmark for Dense Retrieval", "authors": [ { "first": "Xinyu", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Peng", "middle": [], "last": "Shi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present Mr. TYDI, a multilingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations. The goal of this resource is to spur research in dense retrieval techniques in non-English languages, motivated by recent observations that existing techniques for representation learning perform poorly when applied to out-of-distribution data. As a starting point, we provide zero-shot baselines for this new dataset based on a multilingual adaptation of DPR that we call \"mDPR\". Experiments show that although the effectiveness of mDPR is much lower than BM25, dense representations nevertheless appear to provide valuable relevance signals, improving BM25 results in sparse-dense hybrids. In addition to analyses of our results, we also discuss future challenges and present a research agenda in multilingual dense retrieval. Mr. TYDI can be downloaded at https://github.com/ castorini/mr.tydi. \u0c10\u0c06#\u0c0e%\u0c0e&\u0c0e&-1)\u0c40 \u0c09\u0c2a\u0c17/ 0 \u0c2c\u0c303\u0c355 \u0c0e\u0c02\u0c24? (What is the weight of the IRNSS-1C satellite?) \u0c38\u0c2e#\u0c26% \u0c02\u0c32(*\u0c35,\u0c3f \u0c02.\u0c47 \u0c051 23 \u0c264 \u0c1c\u0c02\u0c247\u0c358 \u0c0f:; ? (Which is the largest marine animal?) answerable by in-language Wikipedia unanswerable by in-language Wikipedia \u2026\u0c2a9 :\u0c17 \u0c38\u0c2e\u0c2f\u0c02\u0c32?, \u0c07\u0c02\u0c27\u0c28 \u0c38DE \u0c24\u0c02F\u0c3e \u0c10\u0c06#\u0c0e%\u0c0e&\u0c0e&-1)\u0c40 \u0c09\u0c2a\u0c17/ \u0c39\u0c02 \u0c2c\u0c303\u0c355 1425.4MN \u0c32?\u0c32O...", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "We present Mr. TYDI, a multilingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, designed to evaluate ranking with learned dense representations. The goal of this resource is to spur research in dense retrieval techniques in non-English languages, motivated by recent observations that existing techniques for representation learning perform poorly when applied to out-of-distribution data. As a starting point, we provide zero-shot baselines for this new dataset based on a multilingual adaptation of DPR that we call \"mDPR\". Experiments show that although the effectiveness of mDPR is much lower than BM25, dense representations nevertheless appear to provide valuable relevance signals, improving BM25 results in sparse-dense hybrids. 
In addition to analyses of our results, we also discuss future challenges and present a research agenda in multilingual dense retrieval. Mr. TYDI can be downloaded at https://github.com/castorini/mr.tydi.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Retrieval approaches based on learned dense representations, typically derived from transformers, form an exciting new research direction that has received much attention of late. These dense retrieval techniques generally adopt a supervised approach to representation learning, where a labeled dataset is used to train two encoders (one for the queries and the other for texts from the corpus to be retrieved) whose output representation vectors are then compared with a simple comparison function such as inner product. Retrieval against a large text corpus is typically formulated as nearest neighbor search and efficiently executed using off-the-shelf libraries. In the literature, this is known as a \"bi-encoder\" design. Well-known examples include DPR (Karpukhin et al., 2020), ANCE (Xiong et al., 2021), and ColBERT (Khattab and Zaharia, 2020), but there is much recent work along these lines (Gao et al., 2020; Hofst\u00e4tter et al., 2020; Hofst\u00e4tter et al., 2021; Lin et al., 2021b), just to list a few papers.", "cite_spans": [ { "start": 755, "end": 779, "text": "(Karpukhin et al., 2020)", "ref_id": "BIBREF10" }, { "start": 787, "end": 807, "text": "(Xiong et al., 2021)", "ref_id": "BIBREF22" }, { "start": 822, "end": 849, "text": "(Khattab and Zaharia, 2020)", "ref_id": "BIBREF11" }, { "start": 900, "end": 918, "text": "(Gao et al., 2020;", "ref_id": "BIBREF6" }, { "start": 919, "end": 943, "text": "Hofst\u00e4tter et al., 2020;", "ref_id": "BIBREF7" }, { "start": 944, "end": 968, "text": "Hofst\u00e4tter et al., 2021;", "ref_id": "BIBREF8" }, { "start": 969, "end": 987, "text": "Lin et al., 2021b)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Like all methods based on supervised machine learning, the effectiveness of the trained models on \"out of distribution\" (OOD) samples is an important issue since it concerns model robustness and generalizability. For dense retrieval, training data typically comprise (query, relevant passage) pairs, and in this context, OOD could mean that (1) the passage encoder is fed text from a different domain, genre, register, etc.
than the training data, (2) the query encoder is fed queries that are different from the training queries, (3) the relationship between the inputs at inference time is different from the training samples (e.g., task variations), or (4) a combination of all of the above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "It is, in fact, already known that dense retrieval techniques generalize poorly across different corpora, queries, tasks, etc. Recently, Thakur et al. (2021) constructed a benchmark to specifically evaluate the zero-shot transfer capabilities of dense retrieval models by creating a framework that unifies over a dozen retrieval datasets spanning diverse domains. In a zero-shot setting, the authors found that BM25 remained the most effective overall. That is, dense retrieval techniques trained on one dataset can spectacularly fail on another dataset, exactly the out-of-distribution challenges we discussed above. In contrast, BM25 \"just works\" regardless of the corpus and queries, even though on \"in distribution\" samples, dense retrieval models are unequivocally more effective. Thus, learned dense representations are not as general and robust as BM25 representations.", "cite_spans": [ { "start": 137, "end": 157, "text": "Thakur et al. (2021)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "This paper focuses on another aspect of the generalizability and robustness of learned representations for ranking: What if the encoders are applied to texts in languages different from the one they are trained in? Our focus is on mono-lingual retrieval in non-English languages (e.g., Bengali queries against Bengali documents) rather than cross-lingual retrieval, where documents and queries are in different languages (e.g., English queries against Arabic documents). We view this work as having three main contributions: First, we construct and share Mr. TYDI, a multi-lingual benchmark dataset for mono-lingual retrieval in eleven diverse languages, designed to evaluate ranking with learned dense representations. This dataset can be viewed as the \"open-retrieval\" condition of the TYDI multi-lingual question answering (QA) dataset (Clark et al., 2020), and \"Mr\" in Mr. TYDI stands for \"multi-lingual retrieval\". We describe the construction of this dataset and how it is different from existing resources. Second, we report zero-shot baselines for Mr. TYDI, including a dense retrieval method based on a multi-lingual version of DPR (Karpukhin et al., 2020) that we call \"mDPR\". Third, we present a number of initial findings about baseline results that highlight future challenges and begin to define a research agenda in multi-lingual dense retrieval.
Most interestingly, we find that although the zero-shot effectiveness of mDPR is much worse than BM25, dense representations appear to provide valuable relevance signals, improving BM25 results in sparse-dense hybrids.", "cite_spans": [ { "start": 1037, "end": 1057, "text": "(Clark et al., 2020)", "ref_id": "BIBREF3" }, { "start": 1340, "end": 1363, "text": "(Karpukhin et al., 2020", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In presenting a new benchmark dataset, one important question to answer is: Why is a new resource needed? We begin by addressing this question. The introduction already lays out the intellectual motivation for our work. Thus, we focus here on explaining why existing datasets are not sufficient. The answer is summarized in Figure 1.", "cite_spans": [], "ref_spans": [ { "start": 324, "end": 332, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Mr. TYDI is constructed from TYDI (Clark et al., 2020), a question answering dataset covering eleven typologically diverse languages. For each language, the creators provided annotators with a prompt (the first 100 characters of a Wikipedia article), who were asked to write a question that cannot be answered by the snippet. Then, for each question, annotators were given the top Wikipedia article returned by Google search and asked to label the relevance of each passage in the article as well as to identify a minimal answer span (if possible). Given this procedure, the answer to the question may or may not be found in the passages from the selected article. The answer passages are always in the same language as the questions. Note that the questions in different languages are not comparable as they are created independently rather than through translation.", "cite_spans": [ { "start": 34, "end": 54, "text": "(Clark et al., 2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "The weakness of TYDI from our perspective is that it is essentially a machine reading comprehension task like SQuAD (Rajpurkar et al., 2016) because candidate passages are all included as part of the dataset (i.e., the passages are from the top Wikipedia article returned by Google search). Instead, we need a resource akin to what QA researchers call the \"open-domain\" or \"open-retrieval\" task, where the problem involves retrieval from a much larger corpus (e.g., all of Wikipedia) (Chen et al., 2017). Thus, at a high level, Mr. TYDI can be viewed as an open-retrieval extension to TYDI. Asai et al. (2021) created XOR-TYDI, a cross-lingual QA dataset built on TYDI by annotating answers in English Wikipedia for questions TYDI considered unanswerable in the original source (non-English) language. This was accomplished by randomly sampling 5,000 unanswerable (non-English) questions from TYDI, and then searching English Wikipedia articles for answers. Specifically, each non-English question was first translated into English; then, annotators were given the top-ranked English Wikipedia articles and asked to label passages containing the answer.", "cite_spans": [ { "start": 116, "end": 140, "text": "(Rajpurkar et al., 2016)", "ref_id": "BIBREF18" }, { "start": 484, "end": 503, "text": "(Chen et al., 2017)", "ref_id": "BIBREF2" }, { "start": 592, "end": 610, "text": "Asai et al. 
(2021)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "The XOR-TYDI dataset contains three overlapping tasks, but all of them are focused on the cross-lingual condition. Among the three tasks (XOR-ENGLISHSPAN, XOR-RETRIEVE, and XOR-FULL), XOR-RETRIEVE is most comparable to our work, but the retrieval target for the task is explicitly English Wikipedia articles rather than Wikipedia articles in the question's language. While the XOR-FULL task requires systems to select answer spans from both English and targetlanguage Wikipedia articles, the dataset does not provide ground truth for the intermediate retrieval step, thus it cannot be used for our evaluations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "The annotations of XOR-TYDI do not allow us to examine mono-lingual retrieval in non-English languages because the creators started with \"unanswerable\" non-English TYDI questions. Furthermore, since all answer passage annotations were performed on English Wikipedia, this doesn't help if we are interested in, for example, searching Finnish Wikipedia with Finnish questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Another point of comparison worth discussing is MKQA (Longpre et al., 2020) , which comprises 10k question-answer pairs aligned across 26 typologically diverse languages. Questions are paired with exact answers in the different languages, and evaluation is conducted in the open-retrieval setting by matching those answers in retrieved text. There are two main differences between our work and MKQA: First, MKQA was created via translation in order to achieve cross-lingual alignment. In addition to the possible translation artifacts that such a process might introduce, which Clark et al. (2020) discussed at length, we argue that forced alignment creates non-natural questions, for the simple reason that speakers of different languages are likely to be interested in different topics. This is different from the \"geographically dependent\" questions that MKQA tries to avoid. Take the question \"who starred in the movie bridge over the river kwai\" as an example: While it does not involve any geographical preference, the question is probably less likely to be asked in, say, Swahili, compared to in English or Thai. Second, the builders of MKQA explicitly made the decision to create \"retrievalindependent answer annotations\" that are linked to Wikidata entities and a few other value types. This decision, we feel, restricts the range of natural language questions that are covered. The crosslingual aspect of the dataset appears to be primarily limited to entity translations, which likely do not cover a wide range of linguistic phenomena (which is the reason that we are interested in typologically diverse languages to begin with). Thus, we believe that Mr. TYDI fills a gap in the evaluation space that is currently not occupied.", "cite_spans": [ { "start": 53, "end": 75, "text": "(Longpre et al., 2020)", "ref_id": "BIBREF16" }, { "start": 578, "end": 597, "text": "Clark et al. 
(2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Many multi-lingual (both mono-lingual and cross-lingual) information retrieval and question answering datasets have been constructed over the past decades, via community-wide evaluations at TREC, FIRE, CLEF, and NCTIR. These test collections are typically built on newswire articles, although some evaluations use Wikipedia and scientific texts. While no doubt useful for evaluation, these test collections usually comprise only a small number of queries (at most a few dozen) with relevance judgments, which are insufficient to finetune dense retrieval models. Furthermore, whereas TYDI at least draws from comparable corpora (i.e., Wikipedia articles), these test collections are built on corpora from much more diverse sources. This makes it difficult to generalize across different languages. For these reasons, the above-mentioned IR and QA test collections are not suitable for tackling the research questions we are interested in.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background and Related Work", "sec_num": "2" }, { "text": "Having justified the need for a new benchmark dataset, this section describes the construction of Mr. TYDI, which can be best described as an openretrieval extension to TYDI.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "Corpus The formulation of any text ranking problem begins with a corpus C = {d i } comprising the units of text to be retrieved. As the starting point, we used exactly the same raw Wikipedia dumps as TYDI. Relevance annotations in TYDI are provided at the passage level (in the passage selection task), and thus we kept the same level of granularity in our corpus preparation. For articles covered by TYDI (identified by the article titles), we retained the original passages. For articles that are not covered by TYDI, we prepared passages using Wiki-Extractor 1 based on natural discourse units (e.g., two consecutive newlines in the wiki markup). Unfortunately, Clark et al. 2020did not precisely document their passage segmentation method, but based on manual examination of our results, the generated passages appear qualitatively similar to the TYDI passages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "The result of the corpus preparation process is, for each language, a collection of passages from the Wikipedia articles in that language. To form the final passages that comprise the basic unit of retrieval, we prepend the title of the Wikipedia article to each passage. This creates retrieval units that can be more readily understood in isolation. 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "Task While Mr. TYDI is adapted from a QA dataset, our task is mono-lingual ad hoc retrieval. That is, given a question in language L, the task is to retrieve a ranked list of passages from C L , the Wikipedia collection in the same language (prepared in the manner described above), where the retrieved passages are ranked according to their relevance to the given question.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "Our assumption here is a standard \"retriever-1 https://github.com/attardi/ wikiextractor 2 Mr. TYDI v1.1 contains these article titles, whereas v1.0 did not. 
{ "text": "Footnote 1: https://github.com/attardi/wikiextractor Footnote 2: Mr. TYDI v1.1 contains these article titles, whereas v1.0 did not. Results reported in this paper are with v1.1; for differences, please refer to the earlier version of our paper posted on arXiv.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "Our assumption here is a standard \"retriever-reader\" framework (Chen et al., 2017) or a multi-stage ranking architecture, where we focus on the retriever (what IR researchers call candidate generation or first-stage retrieval). For end-to-end question answering, the output of the retriever would be fed to a reader for answer extraction. This focus on retrieval allows us to explore the research questions outlined in the introduction, and this formulation is consistent with previous work in dense retrieval, e.g., Karpukhin et al. (2020).", "cite_spans": [ { "start": 18, "end": 37, "text": "(Chen et al., 2017)", "ref_id": "BIBREF2" }, { "start": 472, "end": 495, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "Questions and Judgments To prepare the questions, we started with all questions provided by TYDI and removed those without any answer passages or whose answer passages are all empty. We consider all non-empty annotated answer passages from TYDI as relevant to the corresponding question in Mr. TYDI. We adopt the development set of TYDI as our test set, since the original test data are not public. A new development set was created by randomly sampling 20% of questions from the original training set. We observed that some of the questions in TYDI are shared between the training and development set (but labeled with different answer passages). In these cases, we retained the duplicate questions only in the training set. Descriptive statistics for Mr. TYDI are shown in Table 1, where languages are identified by their two-letter ISO 639 language codes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "In summary, relevant passages in Mr. TYDI are imputed from TYDI. Since Clark et al. (2020) only asked annotators to assess the top-ranked article for each question, there are likely relevant passages that have not been identified. Following standard assumptions in information retrieval, unjudged passages are considered non-relevant. Thus, it is likely that ranking models will retrieve false negatives, i.e., passages that are relevant, but would not be properly rewarded.", "cite_spans": [ { "start": 71, "end": 90, "text": "Clark et al. (2020)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "In other words, our judgments are far from exhaustive. This might be a cause for concern, but is a generally accepted practice in IR research due to the challenges of gathering complete judgments. The widely used MS MARCO datasets (Bajaj et al., 2018), for example, share this characteristic of having \"sparse judgments\". No claim is made about the exhaustiveness of the annotations, as both Mr. TYDI and MS MARCO provide only about one good answer per question. From a methodological perspective, findings based on MS MARCO \"sparse judgments\" are largely consistent with results from more expensive evaluation efforts (to gather more complete judgments), such as the TREC Deep Learning Tracks (Craswell et al., 2020, 2021). We expect a similar parallel here: more exhaustive judgments will change the absolute scores, but will likely not affect the findings qualitatively.", "cite_spans": [ { "start": 694, "end": 716, "text": "(Craswell et al., 2020", "ref_id": "BIBREF5" }, { "start": 717, "end": 741, "text": "(Craswell et al., , 2021", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" },
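The question preparation just described (drop unanswerable questions, reuse the TYDI development set as the test set, carve a new development set out of 20% of the training questions, and keep duplicated questions only in the training set) can be sketched as follows. This is illustrative code over hypothetical in-memory structures, not the actual scripts used to build Mr. TYDI.

```python
import random

def build_splits(tydi_train, tydi_dev, dev_fraction=0.2, seed=42):
    """Sketch only: examples are dicts with a 'question' string and 'answer_passages'."""
    def answerable(examples):
        # Remove questions with no answer passages or only empty ones.
        return [ex for ex in examples
                if any(p.strip() for p in ex.get("answer_passages", []))]

    test = answerable(tydi_dev)          # TYDI dev becomes the Mr. TYDI test set
    pool = answerable(tydi_train)

    rng = random.Random(seed)
    rng.shuffle(pool)
    cut = int(dev_fraction * len(pool))  # 20% of training questions -> new dev
    dev, train = pool[:cut], pool[cut:]

    # Questions that appear in both splits are retained only in the training set.
    train_questions = {ex["question"] for ex in train}
    dev = [ex for ex in dev if ex["question"] not in train_questions]
    return train, dev, test
```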
{ "text": "Metrics We evaluate results in terms of reciprocal rank and recall at a depth k of 100 hits. The first metric quantifies the ability of a model to generate a good ranking, while the second metric provides an upper bound on end-to-end effectiveness (e.g., when retrieval results are fed to a reader for answer extraction). The setting of k = 100 is consistent with work in the QA literature.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mr. TYDI", "sec_num": "3" }, { "text": "We provide a few \"obvious\" baselines for Mr. TYDI as a starting point for future research:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" }, { "text": "BM25 We report results with bag-of-words BM25, a strong traditional IR baseline, with the implementation provided by Pyserini (Yang et al., 2017; Lin et al., 2021a), which is built on the open-source Lucene search library. Lucene provides language-specific analyzers for nine of the eleven languages in Mr. TYDI; for these languages, we used the Lucene implementations. For Telugu (Te) and Swahili (Sw), since Lucene does not provide any language-specific implementations, we simply used its whitespace analyzer. We report BM25 scores on two conditions, with default and tuned k_1 and b parameters; the default settings are k_1 = 0.9, b = 0.4. Tuning was performed on the development set, on a per-language basis, via grid search on k_1 \u2208 [0.1, 1.6] and b \u2208 [0.1, 1.0], with step size 0.1, optimizing MRR@100.", "cite_spans": [ { "start": 126, "end": 145, "text": "(Yang et al., 2017;", "ref_id": "BIBREF23" }, { "start": 146, "end": 164, "text": "Lin et al., 2021a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" }, { "text": "mDPR Dense passage retriever (DPR) by Karpukhin et al. (2020) is a well-known bi-encoder model for open-domain QA that we adapt to mono-lingual retrieval in non-English languages by simply replacing BERT with multi-lingual BERT (mBERT), 3 but otherwise keeping all other aspects of the training procedure identical. This adaptation, which we call mDPR, was trained on the English QA dataset Natural Questions (Kwiatkowski et al., 2019) using Facebook's open-source codebase.", "cite_spans": [ { "start": 38, "end": 61, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF10" }, { "start": 409, "end": 435, "text": "(Kwiatkowski et al., 2019)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" }, { "text": "Our retrieval experiments with mDPR can be characterized as zero-shot: We applied the same mBERT document encoder to convert passages from all eleven languages into dense vectors; similarly, we applied the same mBERT question encoder to all questions. Retrieval in each language was performed using Facebook's Faiss library for nearest neighbor search (Johnson et al., 2017); we used the FlatIP indexes. Experiments were conducted using the same codebase as the DPR replication experiments of Ma et al. (2021), with the Pyserini toolkit (Lin et al., 2021a).", "cite_spans": [ { "start": 352, "end": 374, "text": "(Johnson et al., 2017)", "ref_id": "BIBREF9" }, { "start": 494, "end": 510, "text": "Ma et al. (2021)", "ref_id": "BIBREF17" }, { "start": 539, "end": 558, "text": "(Lin et al., 2021a)", "ref_id": "BIBREF13" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" },
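To make the zero-shot setup concrete, the following is a minimal sketch of DPR-style dense retrieval with an mBERT encoder and a Faiss flat inner-product index. The single shared encoder with [CLS] pooling is a simplification for illustration; the actual mDPR question and passage encoders were trained separately with Facebook's DPR codebase, as described above.

```python
import faiss
import torch
from transformers import AutoModel, AutoTokenizer

# Assumption: one mBERT checkpoint with CLS pooling stands in for the trained
# mDPR question/passage encoders.
tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased").eval()

@torch.no_grad()
def encode(texts, max_length=256):
    batch = tokenizer(texts, padding=True, truncation=True,
                      max_length=max_length, return_tensors="pt")
    # DPR-style representation: the [CLS] vector of the last layer.
    return model(**batch).last_hidden_state[:, 0, :].numpy()

passages = ["Blue whale\nThe blue whale is a marine mammal ...",
            "IRNSS-1C\nIRNSS-1C is a navigation satellite ..."]
p_vecs = encode(passages).astype("float32")

index = faiss.IndexFlatIP(p_vecs.shape[1])   # exact (flat) inner-product search
index.add(p_vecs)

q_vecs = encode(["Which is the largest marine animal?"]).astype("float32")
scores, ids = index.search(q_vecs, k=2)      # top-k passages per question
print(list(zip(ids[0], scores[0])))
```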
{ "text": "Our choice of zero-shot mDPR as a baseline deserves some discussion. At a high level, we are interested in the generalizability of dense retrieval techniques in out-of-distribution settings (in this case, primarily different languages). Operationally, our experimental setup captures the scenario where the model does not benefit from any exposure to the target task, not even (question, relevant passage) pairs in the English portion of Mr. TYDI. This makes the comparison \"fair\" to BM25, which is similarly not provided any labeled data from the target task (in the case with default parameters).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" }, { "text": "Sparse-Dense Hybrid Our hybrid technique combines the scores of sparse (BM25) and dense (mDPR) retrieval results. The final fusion score of each document is calculated as s_sparse + \u03b1 \u2022 s_dense, where s_sparse and s_dense represent the scores from sparse and dense retrieval, respectively. This strategy is similar to the one described by Ma et al. (2021). We take 1000 hits from mDPR and 1000 hits from BM25 and normalize the scores from each into [0, 1], since the ranges of the two types of scores are otherwise quite different. If a hit is retrieved by only one method, its normalized score from the other method is set to zero. The weight \u03b1 was tuned in [0, 1] with a simple line search on the development set, optimizing MRR@100 with step size 0.01.", "cite_spans": [ { "start": 340, "end": 356, "text": "Ma et al. (2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4" },
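The hybrid is straightforward to express in code. A minimal sketch, assuming each run is given as a {docid: score} dictionary; the min-max normalization, the zero fill for hits retrieved by only one method, and the weighted sum follow the description above, while the helper names are our own.

```python
def minmax_normalize(run):
    """Map the raw scores of one run into [0, 1]."""
    lo, hi = min(run.values()), max(run.values())
    if hi == lo:
        return {doc: 0.0 for doc in run}
    return {doc: (s - lo) / (hi - lo) for doc, s in run.items()}

def hybrid_fuse(sparse_run, dense_run, alpha):
    """Fusion score s_sparse + alpha * s_dense over the union of hits.
    A hit missing from one run contributes zero from that run."""
    sparse = minmax_normalize(sparse_run)
    dense = minmax_normalize(dense_run)
    docs = set(sparse) | set(dense)
    fused = {d: sparse.get(d, 0.0) + alpha * dense.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda x: x[1], reverse=True)

# alpha itself would be chosen by a line search over [0, 1] with step 0.01,
# keeping the value that maximizes MRR@100 on the development set.
```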
{ "text": "We performed experiments on Mr. TYDI v1.1, where each passage contains the title of the Wikipedia article and the passage text. Table 2 reports results on the test set across all eleven languages; mean reciprocal rank (MRR) in the top table and recall in the bottom table, both at a cutoff of 100 hits; the final column reports the average across all languages. The rows report BM25 results (default and tuned), followed by results of mDPR and the sparse-dense hybrid. For the hybrid method, statistically significant improvements over tuned BM25 are denoted with the symbol \u2020 based on paired t-tests (p < 0.01).", "cite_spans": [], "ref_spans": [ { "start": 128, "end": 135, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Results and Analysis", "sec_num": "5" }, { "text": "By comparing scores in each column, we observe that the absolute effectiveness of the techniques varies greatly across languages. Absolute scores are difficult to compare because both the questions and the underlying corpora are different. However, three high-level findings emerge: First, we find that tuning BM25 parameters yields at most minor improvements for most languages, both in terms of MRR@100 and recall, except for Telugu (cases where scores decrease slightly can be explained by noise in the training/test splits). This is a bit of a surprise, as parameter tuning usually yields larger overall gains, e.g., in the MS MARCO collections (Bajaj et al., 2018). Regardless, tuned BM25 serves as a competitive baseline for the remainder of our experiments.", "cite_spans": [ { "start": 648, "end": 668, "text": "(Bajaj et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "High-Level Findings", "sec_num": "5.1" }, { "text": "Second, we notice that mDPR underperforms BM25 across all languages except for English. That is, in a zero-shot setting, retrieval using learned dense representations from mDPR (fine-tuned with NQ) is much worse than retrieval using BM25-based representations. Clearly, mDPR is far less robust in cross-lingual generalization. Even within the same language, mDPR seems to be sensitive to characteristics of the training data. Effectiveness on the English portion of Mr. TYDI is only slightly better than BM25, likely arising from the fact that we are applying an NQ-trained model on \"out-of-distribution\" questions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "High-Level Findings", "sec_num": "5.1" }, { "text": "Based on Karpukhin et al. (2020) and the experiments by Ma et al. (2021), we would have expected mDPR to beat BM25 for in-distribution training and inference. Since NQ is also based on Wikipedia, corpus differences are less likely an issue; these results suggest that questions in TYDI and NQ are qualitatively different.", "cite_spans": [ { "start": 9, "end": 32, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF10" }, { "start": 56, "end": 72, "text": "Ma et al. (2021)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "High-Level Findings", "sec_num": "5.1" }, { "text": "Third, despite the fact that mDPR effectiveness is quite a bit worse than BM25, the MRR@100 of the sparse-dense hybrid is significantly higher than tuned BM25 for nine of the eleven languages (the exceptions are Swahili and Telugu). Put differently, this means that although mDPR by itself is a poor dense retrieval model in a zero-shot setting, it nevertheless contributes valuable relevance signals that are able to improve over tuned BM25. On average, the hybrid results are around eight and five points absolute higher than tuned BM25 in terms of MRR@100 and recall, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "High-Level Findings", "sec_num": "5.1" }, { "text": "Because absolute scores vary widely across languages, it is helpful to normalize the effectiveness of tuned BM25 to 1.0 and scale the effectiveness of mDPR and the hybrid approach appropriately; this is shown in Figure 2 (left). As an example, from the leftmost bars, we see that the MRR@100 of mDPR in Arabic is 71% of BM25, but the sparse-dense hybrid improves over BM25 by 34% (statistically significant). We additionally plot the relation between the normalized effectiveness of mDPR and the hybrid approach in Figure 2 (right). This plot shows a clear positive (linear) correlation; that is, better mDPR (relative) effectiveness translates into bigger improvements over BM25 in the sparse-dense hybrid. What is surprising, though, is that this relationship seems to hold even if the mDPR results are poor. For example, in Thai, the MRR@100 of mDPR is only 32% of BM25 (tuned), yet the hybrid yields a statistically significant 18% relative gain. However, there appear to be limits to our simple linear combination of relevance signals: for both Swahili and Telugu, the hybrid approach does not outperform tuned BM25, likely because mDPR effectiveness is too poor.", "cite_spans": [], "ref_spans": [ { "start": 212, "end": 222, "text": "Figure 2 (", "ref_id": "FIGREF1" }, { "start": 489, "end": 505, "text": "Figure 2 (right)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "High-Level Findings", "sec_num": "5.1" },
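For reference, the two evaluation measures used throughout (MRR@100 and recall@100, introduced in Section 3) can be computed from a ranked list and a set of judged-relevant passages as sketched below. This is a minimal illustration rather than the official evaluation script; depending on the exact definition, recall may instead be taken as whether any relevant passage appears in the top k, as in the per-question analysis of Section 5.2.

```python
def reciprocal_rank(ranked_docids, relevant, k=100):
    """1/rank of the first relevant hit within the top k, else 0."""
    for rank, docid in enumerate(ranked_docids[:k], start=1):
        if docid in relevant:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_docids, relevant, k=100):
    """Fraction of this question's relevant passages retrieved in the top k."""
    retrieved = set(ranked_docids[:k]) & set(relevant)
    return len(retrieved) / len(relevant)

def evaluate(run, qrels, k=100):
    """run: {qid: [docid, ...]} ranked lists; qrels: {qid: set of relevant docids}."""
    mrr = sum(reciprocal_rank(run[q], qrels[q], k) for q in qrels) / len(qrels)
    recall = sum(recall_at_k(run[q], qrels[q], k) for q in qrels) / len(qrels)
    return mrr, recall
```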
{ "text": "To provide a more in-depth analysis, we attempt to untangle effectiveness into two separate components: (1) retrieving a relevant passage and (2) placing the relevant passages into the top ranks. The recall figures in Table 2 already quantify the first component, but MRR@100 alone does not tell the complete story for the second component, since the metric averages in a zero for every question where no relevant passage appears in the top-100 hits. It could be the case, for example, that mDPR provides a good ranking for those queries where it retrieves a relevant result in the top 100.", "cite_spans": [], "ref_spans": [ { "start": 214, "end": 221, "text": "Table 2", "ref_id": "TABREF4" } ], "eq_spans": [], "section": "Components of Effectiveness", "sec_num": "5.2" }, { "text": "The results of such an analysis, comparing BM25 (tuned) and mDPR, are shown in Figure 3 for all languages (ordered alphabetically). Each plot consists of a histogram and a line graph. The histogram captures the distribution of the ranks (binned by ten) where the relevant passage appears for each question. 4 Questions for which no relevant passage was found in the top-100 hits are tallied in the rightmost bar (\"Not Found\"). Thus, all questions are either in the rightmost bar (not found in the top-100 hits) or in one of the top-100 bins; these are exactly the components of recall, so the histograms are a more fine-grained way to visualize recall.", "cite_spans": [ { "start": 307, "end": 308, "text": "4", "ref_id": null } ], "ref_spans": [ { "start": 79, "end": 87, "text": "Figure 3", "ref_id": null } ], "eq_spans": [], "section": "Components of Effectiveness", "sec_num": "5.2" }, { "text": "The superimposed line graphs in each plot show the ratio of the number of questions falling in each bin to the total number of questions in all top-100 bins (that is, we remove the \"Not Found\" bin and renormalize). These plots answer the following question: Given that the relevant passage appeared in the top-100 hits, how well did the model perform at ranking it? In other words, we have isolated the ranking ability of the model. Looking only at the line graphs, these results tell us that for Arabic, Japanese, and Korean, BM25 and mDPR are comparable when we focus only on ranking, that is, given that the relevant passage appears in the top-100 hits. In other words, MRR@100 differences for these languages come mostly from the fact that mDPR misses many relevant passages that BM25 finds (i.e., exhibits lower recall). For the other languages, BM25 appears to exhibit both better recall and better ranking. Consider Swahili, for example: BM25 places many more relevant passages in the top 10 and also has far fewer questions where no correct answer appears in the top-100 hits. Thus, this analysis isolates the different failure modes of mDPR (dense retrieval) relative to BM25 (sparse retrieval). The same analysis comparing BM25 (tuned) and the sparse-dense hybrid is shown in Figure 4. These plots reveal how the hybrid is improving the BM25 results. We see that gains in Bengali, Indonesian, Swahili, and Telugu come mostly from higher recall. That is, ranking capabilities are roughly comparable (the line plots largely overlap) but the hybrid approach has fewer queries where the relevant passage does not appear in the top-100 hits. For Thai, the gain comes from better ranking, while recall is only slightly better (the \"Not Found\" bars are nearly the same). For the other languages, the hybrid improves both recall and ranking.", "cite_spans": [], "ref_spans": [ { "start": 1285, "end": 1293, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Components of Effectiveness", "sec_num": "5.2" },
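The analysis behind Figures 3 and 4 reduces to a computation over the rank of the highest-ranked relevant passage per question (see footnote 4). A sketch with bin width 10, depth 100, a "Not Found" bucket, and the renormalized within-top-100 ratios used for the line graphs; function names and data layout are illustrative.

```python
from collections import Counter

def best_rank(ranked_docids, relevant, k=100):
    """Rank (1-based) of the highest-ranked relevant passage, or None if absent."""
    for rank, docid in enumerate(ranked_docids[:k], start=1):
        if docid in relevant:
            return rank
    return None

def rank_histogram(run, qrels, k=100, bin_size=10):
    bins = Counter()
    for qid, relevant in qrels.items():
        r = best_rank(run.get(qid, []), relevant, k)
        bins["Not Found" if r is None else (r - 1) // bin_size] += 1
    found_total = sum(v for b, v in bins.items() if b != "Not Found")
    # Line graph: distribution over the top-100 bins only ("Not Found" removed,
    # counts renormalized to sum to one).
    ratios = ({b: v / found_total for b, v in bins.items() if b != "Not Found"}
              if found_total else {})
    return bins, ratios
```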
{ "text": "Mr. TYDI provides a resource to begin exploring mono-lingual ad hoc retrieval with both dense and sparse retrieval techniques. In this paper, we have focused primarily on zero-shot baselines. Although zero-shot dense retrieval (mDPR) does not appear to be effective by itself, relevance signals from the model do appear to be complementary to sparse retrieval (bag-of-words BM25). We have identified how they are complementary (better recall vs. better ranking), but the behavior varies across languages, and we do not yet have an explanation for why; for example, do the typological characteristics of the language play a role?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "For our experiments, we have decided to focus on zero-shot effectiveness because it serves as the natural baseline for any technique that tries more sophisticated approaches. Thus, the baselines here are foundational to any future work. We have explicitly decided not to report any language-specific fine-tuning results here, although preliminary experiments suggest that such techniques do bring about benefits. We have not yet systematically explored the broad design space of what has been called \"multi-step fine-tuning strategies\", paralleling the explorations of Shi et al. (2020) in the context of transformer-based reranking models. There are many possible variations, for example, how many languages to use, in what order to sequence data from different languages, possible data augmentation using machine translation, complementary data from other tasks, etc. There are a number of experiments that will allow us to tease apart the effects of language versus other aspects of training data distribution (e.g., NQ vs. TYDI). Exploration of this vast design space is the focus of our immediate future work.", "cite_spans": [ { "start": 558, "end": 575, "text": "Shi et al. (2020)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "In addition, we believe that our dataset can provide a probe to examine the nature of multi-lingual transformer models. Our experimental results show that absolute effectiveness varies quite a bit across languages. Some of these variations may be due to the nature of the queries, the size of the corpora, etc. However, we hypothesize that inherent properties of the transformer model play important roles as well, e.g., the size of the pretraining corpus in each language, typological and other innate characteristics of the languages, etc. We hope that Mr. TYDI can help us untangle some of these issues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Future Work", "sec_num": "6" }, { "text": "In this work, we introduce Mr. 
TYDI, a multilingual benchmark dataset for mono-lingual retrieval in eleven typologically diverse languages, built on TYDI. We describe zero-shot experiments using BM25, mDPR, and a sparse-dense hybrid. The experimental results are not surprising: as is already known from complementary experiments, dense retrieval techniques do not generalize well to out-of-distribution input. However, we find that even poor dense retrieval results provide valuable relevance signals in a sparse-dense hybrid.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Of course, this is only the starting point. With Mr. TYDI, we now have a resource to explore our motivating research questions regarding the behavior of dense retrieval models when fed \"out of distribution\" data, and from there, devise techniques to increase the robustness and generalizability of our techniques. The potential broader impact of this work is a more equitable distribution of information access capabilities across diverse languages of the world: to help non-English speakers access relevant information in their own languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "Specifically, the bert-base-multilingual-cased model provided by HuggingFace (Wolf et al., 2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "If the question has multiple retrieved relevant passages, we only consider the smallest rank among them (i.e., the highest-ranked relevant passage).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This research was supported in part by the Canada First Research Excellence Fund and the Natural Sciences and Engineering Research Council (NSERC) of Canada; computational resources were provided by Compute Ontario and Compute Canada.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "XOR QA: Cross-lingual open-retrieval question answering", "authors": [ { "first": "Akari", "middle": [], "last": "Asai", "suffix": "" }, { "first": "Jungo", "middle": [], "last": "Kasai", "suffix": "" }, { "first": "Jonathan", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "547--564", "other_ids": {}, "num": null, "urls": [], "raw_text": "Akari Asai, Jungo Kasai, Jonathan Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021. XOR QA: Cross-lingual open-retrieval question answer- ing. 
In Proceedings of the 2021 Conference of the North American Chapter of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 547-564.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Saurabh Tiwary, and Tong Wang", "authors": [ { "first": "Payal", "middle": [], "last": "Bajaj", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Campos", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "Li", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Rangan", "middle": [], "last": "Majumder", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Mcnamara", "suffix": "" }, { "first": "Bhaskar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Tri", "middle": [], "last": "Nguyen", "suffix": "" }, { "first": "Mir", "middle": [], "last": "Rosenberg", "suffix": "" }, { "first": "Xia", "middle": [], "last": "Song", "suffix": "" }, { "first": "Alina", "middle": [], "last": "Stoica", "suffix": "" } ], "year": 2018, "venue": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1611.09268v3" ] }, "num": null, "urls": [], "raw_text": "Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Ti- wary, and Tong Wang. 2018. MS MARCO: A Hu- man Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268v3.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Reading Wikipedia to answer opendomain questions", "authors": [ { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Antoine", "middle": [], "last": "Bordes", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1870--1879", "other_ids": {}, "num": null, "urls": [], "raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages", "authors": [ { "first": "Jonathan", "middle": [ "H" ], "last": "Clark", "suffix": "" }, { "first": "Eunsol", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Garrette", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Vitaly", "middle": [], "last": "Nikolaev", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "454--470", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 
2020. TyDi QA: A bench- mark for information-seeking question answering in typologically diverse languages. Transactions of the Association for Computational Linguistics, 8:454- 470.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Overview of the TREC 2020 deep learning track", "authors": [ { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "Bhaskar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Emine", "middle": [], "last": "Yilmaz", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Campos", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2102.07662" ] }, "num": null, "urls": [], "raw_text": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv:2102.07662.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Overview of the TREC 2019 deep learning track", "authors": [ { "first": "Nick", "middle": [], "last": "Craswell", "suffix": "" }, { "first": "Bhaskar", "middle": [], "last": "Mitra", "suffix": "" }, { "first": "Emine", "middle": [], "last": "Yilmaz", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Campos", "suffix": "" }, { "first": "Ellen", "middle": [ "M" ], "last": "Voorhees", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2003.07820" ] }, "num": null, "urls": [], "raw_text": "Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv:2003.07820.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Complementing lexical retrieval with semantic residual embedding", "authors": [ { "first": "Luyu", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhuyun", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhen", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Jamie", "middle": [], "last": "Callan", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13969" ] }, "num": null, "urls": [], "raw_text": "Luyu Gao, Zhuyun Dai, Zhen Fan, and Jamie Callan. 2020. Complementing lexical retrieval with seman- tic residual embedding. arXiv:2004.13969.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Improving efficient neural ranking models with cross-architecture knowledge distillation", "authors": [ { "first": "Sebastian", "middle": [], "last": "Hofst\u00e4tter", "suffix": "" }, { "first": "Sophia", "middle": [], "last": "Althammer", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Schr\u00f6der", "suffix": "" }, { "first": "Mete", "middle": [], "last": "Sertkan", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Hanbury", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.02666" ] }, "num": null, "urls": [], "raw_text": "Sebastian Hofst\u00e4tter, Sophia Althammer, Michael Schr\u00f6der, Mete Sertkan, and Allan Hanbury. 2020. Improving efficient neural ranking mod- els with cross-architecture knowledge distillation. 
arXiv:2010.02666.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Efficiently teaching an effective dense retriever with balanced topic aware sampling", "authors": [ { "first": "Sebastian", "middle": [], "last": "Hofst\u00e4tter", "suffix": "" }, { "first": "Sheng-Chieh", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jheng-Hong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Allan", "middle": [], "last": "Hanbury", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)", "volume": "", "issue": "", "pages": "113--122", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sebastian Hofst\u00e4tter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021. Ef- ficiently teaching an effective dense retriever with balanced topic aware sampling. In Proceedings of the 44th Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 2021), pages 113-122.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Billion-scale similarity search with GPUs", "authors": [ { "first": "Jeff", "middle": [], "last": "Johnson", "suffix": "" }, { "first": "Matthijs", "middle": [], "last": "Douze", "suffix": "" }, { "first": "Herv\u00e9", "middle": [], "last": "J\u00e9gou", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1702.08734" ] }, "num": null, "urls": [], "raw_text": "Jeff Johnson, Matthijs Douze, and Herv\u00e9 J\u00e9gou. 2017. Billion-scale similarity search with GPUs. arXiv:1702.08734.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Nat- ural Language Processing (EMNLP), pages 6769- 6781.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "ColBERT: Efficient and effective passage search via contextualized late interaction over BERT", "authors": [ { "first": "Omar", "middle": [], "last": "Khattab", "suffix": "" }, { "first": "Matei", "middle": [], "last": "Zaharia", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 43rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2020)", "volume": "", "issue": "", "pages": "39--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Omar Khattab and Matei Zaharia. 2020. 
ColBERT: Ef- ficient and effective passage search via contextual- ized late interaction over BERT. In Proceedings of the 43rd Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval (SIGIR 2020), pages 39-48.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Natural Questions: A benchmark for question answering research", "authors": [ { "first": "Tom", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "Jennimaria", "middle": [], "last": "Palomaki", "suffix": "" }, { "first": "Olivia", "middle": [], "last": "Redfield", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Collins", "suffix": "" }, { "first": "Ankur", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Alberti", "suffix": "" }, { "first": "Danielle", "middle": [], "last": "Epstein", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Kelcey", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Andrew", "middle": [ "M" ], "last": "Dai", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Slav", "middle": [], "last": "Petrov", "suffix": "" } ], "year": 2019, "venue": "Transactions of the Association for Computational Linguistics", "volume": "7", "issue": "", "pages": "452--466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Al- berti, Danielle Epstein, Illia Polosukhin, Jacob De- vlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A benchmark for question an- swering research. Transactions of the Association for Computational Linguistics, 7:452-466.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Sheng-Chieh", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jheng-Hong", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Ronak", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021)", "volume": "", "issue": "", "pages": "2356--2362", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng- Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a. Pyserini: A Python toolkit for reproducible information retrieval research with sparse and dense representations. 
In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021), pages 2356-2362.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Pretrained transformers for text ranking: BERT and beyond", "authors": [ { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Rodrigo", "middle": [], "last": "Nogueira", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Yates", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2010.06467" ] }, "num": null, "urls": [], "raw_text": "Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2020. Pretrained transformers for text ranking: BERT and beyond. arXiv:2010.06467.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "In-batch negatives for knowledge distillation with tightly-coupled teachers for dense retrieval", "authors": [ { "first": "Jheng-Hong", "middle": [], "last": "Sheng-Chieh Lin", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Yang", "suffix": "" }, { "first": "", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 6th Workshop on Representation Learning for NLP", "volume": "", "issue": "", "pages": "163--173", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021b. In-batch negatives for knowledge distilla- tion with tightly-coupled teachers for dense retrieval. In Proceedings of the 6th Workshop on Represen- tation Learning for NLP (RepL4NLP-2021), pages 163-173.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "MKQA: A linguistically diverse benchmark for multilingual open domain question answering", "authors": [ { "first": "Shayne", "middle": [], "last": "Longpre", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Joachim", "middle": [], "last": "Daiber", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2007.15207" ] }, "num": null, "urls": [], "raw_text": "Shayne Longpre, Yi Lu, and Joachim Daiber. 2020. MKQA: A linguistically diverse benchmark for multilingual open domain question answering. arXiv:2007.15207.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "A replication study of dense passage retriever", "authors": [ { "first": "Xueguang", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Kai", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ronak", "middle": [], "last": "Pradeep", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.05740" ] }, "num": null, "urls": [], "raw_text": "Xueguang Ma, Kai Sun, Ronak Pradeep, and Jimmy Lin. 2021. A replication study of dense passage re- triever. 
arXiv:2104.05740.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "SQuAD: 100,000+ questions for machine comprehension of text", "authors": [ { "first": "Pranav", "middle": [], "last": "Rajpurkar", "suffix": "" }, { "first": "Jian", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Konstantin", "middle": [], "last": "Lopyrev", "suffix": "" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "2383--2392", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. 2016. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 2383-2392, Austin, Texas.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Cross-lingual training of neural models for document ranking", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "He", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2768--2773", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peng Shi, He Bai, and Jimmy Lin. 2020. Cross-lingual training of neural models for document ranking. In Findings of the Association for Computational Lin- guistics: EMNLP 2020, pages 2768-2773.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models", "authors": [ { "first": "Nandan", "middle": [], "last": "Thakur", "suffix": "" }, { "first": "Nils", "middle": [], "last": "Reimers", "suffix": "" }, { "first": "Andreas", "middle": [], "last": "R\u00fcckl\u00e9", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2104.08663" ] }, "num": null, "urls": [], "raw_text": "Nandan Thakur, Nils Reimers, Andreas R\u00fcckl\u00e9, Ab- hishek Srivastava, and Iryna Gurevych. 2021. BEIR: A heterogenous benchmark for zero-shot evaluation of information retrieval models. 
arXiv:2104.08663.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Transformers: State-of-the-art natural language processing", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Lysandre", "middle": [], "last": "Debut", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" }, { "first": "Anthony", "middle": [], "last": "Moi", "suffix": "" }, { "first": "Pierric", "middle": [], "last": "Cistac", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Rault", "suffix": "" }, { "first": "Remi", "middle": [], "last": "Louf", "suffix": "" }, { "first": "Morgan", "middle": [], "last": "Funtowicz", "suffix": "" }, { "first": "Joe", "middle": [], "last": "Davison", "suffix": "" }, { "first": "Sam", "middle": [], "last": "Shleifer", "suffix": "" }, { "first": "Clara", "middle": [], "last": "Patrick Von Platen", "suffix": "" }, { "first": "Yacine", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Jernite", "suffix": "" }, { "first": "Canwen", "middle": [], "last": "Plu", "suffix": "" }, { "first": "Teven", "middle": [ "Le" ], "last": "Xu", "suffix": "" }, { "first": "Sylvain", "middle": [], "last": "Scao", "suffix": "" }, { "first": "Mariama", "middle": [], "last": "Gugger", "suffix": "" }, { "first": "Quentin", "middle": [], "last": "Drame", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Lhoest", "suffix": "" }, { "first": "", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations", "volume": "", "issue": "", "pages": "38--45", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Remi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander Rush. 2020. Trans- formers: State-of-the-art natural language process- ing. In Proceedings of the 2020 Conference on Em- pirical Methods in Natural Language Processing: System Demonstrations, pages 38-45.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Approximate nearest neighbor negative contrastive learning for dense text retrieval", "authors": [ { "first": "Lee", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Chenyan", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Ye", "middle": [], "last": "Li", "suffix": "" }, { "first": "Kwok-Fung", "middle": [], "last": "Tang", "suffix": "" }, { "first": "Jialin", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Paul", "middle": [ "N" ], "last": "Bennett", "suffix": "" }, { "first": "Junaid", "middle": [], "last": "Ahmed", "suffix": "" }, { "first": "Arnold", "middle": [], "last": "Overwijk", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 9th International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate nearest neigh- bor negative contrastive learning for dense text re- trieval. 
In Proceedings of the 9th International Con- ference on Learning Representations (ICLR 2021).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Anserini: Enabling the use of Lucene for information retrieval research", "authors": [ { "first": "Peilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 40th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017)", "volume": "", "issue": "", "pages": "1253--1256", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peilin Yang, Hui Fang, and Jimmy Lin. 2017. Anserini: Enabling the use of Lucene for information retrieval research. In Proceedings of the 40th Annual Inter- national ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2017), pages 1253-1256, Tokyo, Japan.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Comparison between TYDI, XOR-TYDI, and Mr. TYDI with an example in Telugu. The green blocks indicate relevant passages and the red blocks indicate non-relevant passages.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "MRR@100 of mDPR and the sparse-dense hybrid normalized with respect to BM25 for each language (left); corresponding pairs for each language plotted as a scatter plot (right).", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "Analysis of recall and ranking effectiveness comparing BM25 (tuned) and mDPR. In each plot, the histogram shows the distribution of relevant passages; lines plot the distribution of relevant passages normalized to only questions where a relevant passage appears in the top-100 hits. Analysis of recall and ranking effectiveness comparing BM25 (tuned) and the sparse-dense hybrid. Each plot is constructed in a similar manner as the plots inFigure 3.", "num": null, "type_str": "figure" }, "TABREF0": { "content": "
[Recovered figure content: comparison of TYDI, XOR-TYDI (Retrieve), and Mr. TYDI on a Telugu example]
Labels: "relevant passage(s)"; "a given list of irrelevant passages"; "other passages from Wikipedia (Telugu)"; "other passages from Wikipedia (English)"; rows for TYDI, XOR-TYDI (Retrieve), and Mr. TYDI.
Example relevant passage (Telugu, translated): "...The satellite was launched on Thursday, October 16, 2014 at 01:32 Indian time and was successfully launched into orbit..."
Example passage from English Wikipedia: "...Coral reefs are accumulated from the calcareous exoskeletons of marine invertebrates of the order Scleractinia..."
", "text": "te.wikipedia en.wikipedia ... \u0c09\u0c2a\u0c17/ \u0c39\u0c2eP\u0c32 \u0c15\u0c02\u0c2a\u0c28\u0c2eP\u0c32\u0c28\u0c41 \u0c2c\u0c1fT U V\u0c3e\u0c1fT \u0c2c\u0c303\u0c355\u0c32\u0c28\u0c41 \u0c15\u0c28\u0c41FWXY Z\u0c27\u0c2eP\u0c28\u0c41 \u0c15\u0c28\u0c41FWX[\\\u0c303\u2026 ... found a way to find their weights depending on the vibrations of the satellites.", "num": null, "type_str": "table", "html": null }, "TABREF2": { "content": "", "text": "Descriptive statistics for Mr. TYDI: the number of questions (# Q), judgments (# J), and the number of passages (Corpus Size) in each language.", "num": null, "type_str": "table", "html": null }, "TABREF3": { "content": "
(a) MRR@100
Ar Bn En Fi Id Ja Ko Ru Sw Te Th Avg
BM25 (default) 0.368 0.418 0.140 0.284 0.376 0.211 0.285 0.313 0.389 0.343 0.401 0.321
BM25 (tuned) 0.367 0.413 0.151 0.288 0.382 0.217 0.281 0.329 0.396 0.424 0.417 0.333
mDPR 0.260 0.258 0.162 0.113 0.146 0.181 0.219 0.185 0.073 0.106 0.135 0.167
hybrid 0.491\u2020 0.535\u2020 0.284\u2020 0.365\u2020 0.455\u2020 0.355\u2020 0.362\u2020 0.427\u2020 0.405 0.420 0.492\u2020 0.417
(b) Recall@100
Ar Bn En Fi Id Ja Ko Ru Sw Te Th Avg
BM25 (default) 0.793 0.869 0.537 0.719 0.843 0.645 0.619 0.648 0.764 0.758 0.853 0.732
BM25 (tuned) 0.800 0.874 0.551 0.725 0.846 0.656 0.797 0.660 0.764 0.813 0.853 0.758
mDPR 0.620 0.671 0.475 0.375 0.466 0.535 0.490 0.498 0.264 0.352 0.455 0.473
hybrid 0.863\u2020 0.937 0.696\u2020 0.788\u2020 0.887\u2020 0.778\u2020 0.706\u2020 0.760\u2020 0.786 0.827 0.875\u2020 0.809
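The MRR@100 and Recall@100 figures above are per-query scores averaged over the test questions. The sketch below is only an illustration of how such numbers could be computed, assuming a hypothetical run (query id mapped to ranked passage ids) and binary relevance judgments (query id mapped to a set of relevant passage ids); it is not the evaluation code used for the paper.

from typing import Dict, List, Set

def mrr_at_k(run: Dict[str, List[str]], qrels: Dict[str, Set[str]], k: int = 100) -> float:
    # Reciprocal rank of the first relevant passage within the top-k hits, averaged over queries.
    total = 0.0
    for qid, ranked in run.items():
        rr = 0.0
        for rank, docid in enumerate(ranked[:k], start=1):
            if docid in qrels.get(qid, set()):
                rr = 1.0 / rank
                break
        total += rr
    return total / len(run)

def recall_at_k(run: Dict[str, List[str]], qrels: Dict[str, Set[str]], k: int = 100) -> float:
    # Fraction of a query's relevant passages found in the top-k hits, averaged over queries.
    scores = []
    for qid, ranked in run.items():
        relevant = qrels.get(qid, set())
        if relevant:
            scores.append(len(relevant & set(ranked[:k])) / len(relevant))
    return sum(scores) / len(scores)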
", "text": ".258 0.162 0.113 0.146 0.181 0.219 0.185 0.073 0.106 0.135 0.167 hybrid 0.491 \u2020 0.535 \u2020 0.284 \u2020 0.365 \u2020 0.455 \u2020 0.355 \u2020 0.362 \u2020 0.427 \u2020 0.405 0.420 0.492 \u2020 0.417", "num": null, "type_str": "table", "html": null }, "TABREF4": { "content": "", "text": "Results of BM25 (with default and tuned parameters), mDPR, and the sparse-dense hybrid on the test set of Mr. TYDI. The symbol \u2020 indicates significant improvements over BM25 (tuned) (paired t-test, p < 0.01).", "num": null, "type_str": "table", "html": null }, "TABREF5": { "content": "
[Recovered plot data]
Left panel: bar chart of MRR@100 normalized to BM25 (tuned) per language, series BM25 (tuned) / mDPR / hybrid (BM25 tuned normalizes to 1.00); mDPR / hybrid values: Ar 0.71/1.34, Bn 0.62/1.30, En 1.07/1.88, Fi 0.39/1.27, Id 0.38/1.19, Ja 0.83/1.64, Ko 0.78/1.29, Ru 0.56/1.30, Sw 0.18/1.02, Te 0.25/0.99, Th 0.32/1.18.
Right panel: scatter plot of MRR@100 of the sparse-dense hybrid vs. MRR@100 of mDPR, both normalized to BM25 (tuned), one point per language; R^2 = 0.84.
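The bars in the left panel appear to be each model's MRR@100 divided by the BM25 (tuned) MRR@100 for the same language; a quick arithmetic check against panel (a) of the results table, for Arabic (variable names are illustrative only):

# MRR@100 values for Arabic from panel (a).
bm25_tuned, mdpr, hybrid = 0.367, 0.260, 0.491
print(round(mdpr / bm25_tuned, 2))    # 0.71 -> matches the mDPR bar for Ar
print(round(hybrid / bm25_tuned, 2))  # 1.34 -> matches the hybrid bar for Ar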
", "text": "left) for MRR@100. As", "num": null, "type_str": "table", "html": null } } } }