{
"paper_id": "2022",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:12:24.256905Z"
},
"title": "MIA 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin",
"location": {}
},
"email": ""
},
{
"first": "Shayne",
"middle": [],
"last": "Longpre",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin",
"location": {}
},
"email": ""
},
{
"first": "Chia-Hsuan",
"middle": [],
"last": "Lee",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Wisconsin",
"location": {}
},
"email": ""
},
{
"first": "Rui",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "The University of Texas at Austin",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual openretrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual openretrieval QA datasets in 14 typologically diverse languages, and newly annotated openretrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best constrained system uses entity-aware contextualized representations for document retrieval, thereby achieving an average F1 score of 31.6, which is 4.1 F1 absolute higher than the challenging baseline. The best system obtains particularly significant improvements in Tamil (20.8 F1), whereas most of the other systems yield nearly zero scores. The best unconstrained system achieves 32.2 F1, outperforming our baseline by 4.5 points. The official leaderboard 1 and baselines 2 models are publicly available.",
"pdf_parse": {
"paper_id": "2022",
"_pdf_hash": "",
"abstract": [
{
"text": "We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual openretrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual openretrieval QA datasets in 14 typologically diverse languages, and newly annotated openretrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best constrained system uses entity-aware contextualized representations for document retrieval, thereby achieving an average F1 score of 31.6, which is 4.1 F1 absolute higher than the challenging baseline. The best system obtains particularly significant improvements in Tamil (20.8 F1), whereas most of the other systems yield nearly zero scores. The best unconstrained system achieves 32.2 F1, outperforming our baseline by 4.5 points. The official leaderboard 1 and baselines 2 models are publicly available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Open-retrieval 3 question answering (QA) is a task of answering questions in diverse domains given large-scale document collections such as Wikipedia . Despite the rapid progress in this area Karpukhin et al., 2020; Lewis et al., 2020b) , the systems have primarily been evaluated in English, yet openretrieval QA in non-English languages has been understudied (Longpre et al., 2021; Asai et al., 2021a) . Moreover, due to the task complexity, cross-lingual open-retrieval QA has unique challenges such as multi-step inference (retrieval and 1 https://eval.ai/web/challenges/ challenge-page/1638/leaderboard 2 https://github.com/mia-workshop/ MIA-Shared-Task-2022",
"cite_spans": [
{
"start": 192,
"end": 215,
"text": "Karpukhin et al., 2020;",
"ref_id": "BIBREF26"
},
{
"start": 216,
"end": 236,
"text": "Lewis et al., 2020b)",
"ref_id": null
},
{
"start": 361,
"end": 383,
"text": "(Longpre et al., 2021;",
"ref_id": "BIBREF33"
},
{
"start": 384,
"end": 403,
"text": "Asai et al., 2021a)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "3 Also sometimes referred to as open-domain QA; we use open-retrieval as it is not ambiguous with the sense of \"covering many domains.\" answer selection) and cross-lingual pattern matching (Lewis et al., 2020a; Sch\u00e4uble and Sheridan, 1997) , whereas other multilingual NLP tasks have their inputs specified at once (e.g. natural language inference) and typically only need to perform inference on one language at a time.",
"cite_spans": [
{
"start": 189,
"end": 210,
"text": "(Lewis et al., 2020a;",
"ref_id": "BIBREF28"
},
{
"start": 211,
"end": 239,
"text": "Sch\u00e4uble and Sheridan, 1997)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we introduce the MIA 2022 shared task on cross-lingual open-retrieval QA, which tests open-retrieval QA systems across typologically diverse languages. Compared to previous efforts on multilingual open-retrieval QA (Forner et al., 2008 (Forner et al., , 2010 , this shared task covers a wider set of languages (i.e., 16 topologically diverse languages) and orders of magnitude more passages in retrieval targets (i.e., 40 million passages in total), and constitutes the first shared task for massive-scale crosslingual open-retrieval QA. Four teams submitted systems, three of which significantly improve the baseline system based on a state-of-the-art multilingual open-retrieval QA system (Asai et al., 2021b) .",
"cite_spans": [
{
"start": 229,
"end": 249,
"text": "(Forner et al., 2008",
"ref_id": "BIBREF19"
},
{
"start": 250,
"end": 272,
"text": "(Forner et al., , 2010",
"ref_id": "BIBREF18"
},
{
"start": 705,
"end": 725,
"text": "(Asai et al., 2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our analysis reveals that the system performance varies across languages even when the questions are parallel (as in one of our two settings), and several findings from the submitted systems shed light on the importance on entity-enhanced representations, leveraging more passages and data augmentation for future research in multilingual knowledge-intensive NLP. Our analysis suggests that (i) it is still challenging to retrieve passages cross-lingually, (ii) generating answers in the target language whose script differs from the script of evidence document is nontrivial, (iii) and potential answer overlaps in existing datasets may overestimate models' performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We formally introduce our task in Section 2, followed by data collection process for 16 languages in Section 3. We then introduce our baseline systems in Section 4 and the submitted systems. Section 5 presents our meta analysis of the systems performances, and we conclude by suggesting future improvements in this area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first formulate cross-lingual open-retrieval QA and introduce metrics used to evaluate systems' performance. We then present two submission tracks: constrained and unconstrained tracks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Descriptions",
"sec_num": "2"
},
{
"text": "Cross-lingual open-retrieval QA is a challenging multilingual NLP task, where given questions written in a user's preferred language, a system needs to find evidence from large-scale document collections written in many different languages. The final answer needs to be in the user's preferred language which is indicated by their question, as in real-world applications. We follow the general definition of Asai et al. (2021b) , where a system can retrieve evidence from documents in any languages, not limiting the retrieval target to certain languages as in Forner et al. (2008) . For instance, a system needs to answer in Arabic to an Arabic question, but it can use evidence passages written in any language included in a large-document corpus such as English, German, Japanese and so on. In real-world applications, the issues of information asymmetry and information scarcity (Roy et al., 2022; Blasi et al., 2022; Asai et al., 2021a; Joshi et al., 2020) arise in many languages, hence the need to source answer contents from other languages-yet we often do not know a priori in which language the evidence can be found to answer a question.",
"cite_spans": [
{
"start": 408,
"end": 427,
"text": "Asai et al. (2021b)",
"ref_id": "BIBREF5"
},
{
"start": 561,
"end": 581,
"text": "Forner et al. (2008)",
"ref_id": "BIBREF19"
},
{
"start": 883,
"end": 901,
"text": "(Roy et al., 2022;",
"ref_id": "BIBREF40"
},
{
"start": 902,
"end": 921,
"text": "Blasi et al., 2022;",
"ref_id": "BIBREF6"
},
{
"start": 922,
"end": 941,
"text": "Asai et al., 2021a;",
"ref_id": "BIBREF4"
},
{
"start": 942,
"end": 961,
"text": "Joshi et al., 2020)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Formulation",
"sec_num": "2.1"
},
{
"text": "Systems are evaluated using automatic metrics: token-level F1 and exact match (EM). Although EM is often used as the primary evaluation metric for English, the risk of surface-level mismatching (Min et al., 2020a) can be more pervasive in cross-lingual settings. Therefore, we use F1 as the primary metric and rank systems using the F1 scores. Evaluation is conducted using languagespecific tokenization and evaluation scripts provided in the MIA shared task repository. 4 We use data from XOR-TyDi QA and MKQA (detailed in Section 3), and due to different characteristics these datasets have, we macro-average scores per language set on each dataset, and then macro-average those scores to produce an F1 score for XOR-TyDi QA and an F1 score for MKQA to compute the final scores for ranking.",
"cite_spans": [
{
"start": 194,
"end": 213,
"text": "(Min et al., 2020a)",
"ref_id": null
},
{
"start": 471,
"end": 472,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "2.2"
},
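To make the ranking metric concrete, the following minimal sketch (plain Python; the function names are illustrative and not taken from the official MIA evaluation scripts) computes the two-level macro-average described above: per-language F1 is averaged within each dataset, and the two dataset-level scores are then averaged into the final ranking score.

```python
# Minimal sketch of the two-level macro-averaging described above.
# `xor_f1` and `mkqa_f1` map language codes to token-level F1 scores;
# names and structure are illustrative, not the official shared-task scripts.

def macro_average(per_language_f1: dict) -> float:
    """Average F1 over the languages of one dataset (XOR-TyDi QA or MKQA)."""
    return sum(per_language_f1.values()) / len(per_language_f1)

def final_ranking_score(xor_f1: dict, mkqa_f1: dict) -> float:
    """Macro-average each dataset's languages, then average the two dataset scores."""
    return (macro_average(xor_f1) + macro_average(mkqa_f1)) / 2

# Example with made-up numbers:
xor = {"ar": 50.1, "bn": 30.2, "fi": 45.0}
mkqa = {"en": 40.0, "ja": 23.1, "km": 6.0}
print(final_ranking_score(xor, mkqa))  # 0.5 * (41.77 + 23.03), roughly 32.4
```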
{
"text": "For the shared task, we defined two tracks based on the resource used to train systems: constrained and unconstrained settings. Systems trained only on the official training data qualify for the constrained track, while systems trained with additional data sources participate in the unconstrained track.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Tracks",
"sec_num": "2.3"
},
{
"text": "Constrained Track. To qualify as a constrained track submission, participants are required to use the official training corpus, which consists of examples pooled from XOR-TyDi QA and Natural Questions (Kwiatkowski et al., 2019) . See more data collection details in Section 3. No other QA data may be used for training. We allow participants to use off-the-shelf tools for linguistic annotations (e.g. POS taggers, syntactic parsers), as well as any publicly available unlabeled data and models derived from these (e.g. word vectors, pre-trained language models). In the constrained setup, participants may not use external blackbox APIs such as Google Search API and Google Translate API for inference, as those models are often trained on additional data, but they are permitted to use them for offline data augmentation or training.",
"cite_spans": [
{
"start": 201,
"end": 227,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tracks",
"sec_num": "2.3"
},
{
"text": "Unconstrained track. Any model submissions using APIs or training data beyond the scope of the constrained track are considered for the unconstrained setting. Participants are required to report the details of their additional resources used for training, for transparency. For instance, a submission might use publicly available QA datasets, such as CMRC 2018 (Cui et al., 2019) and FQuAD (d'Hoffschmidt et al., 2020) , to create larger-scale training data.",
"cite_spans": [
{
"start": 361,
"end": 379,
"text": "(Cui et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 384,
"end": 418,
"text": "FQuAD (d'Hoffschmidt et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Tracks",
"sec_num": "2.3"
},
{
"text": "The MIA shared task data is derived from two large-scale multilingual evaluation sets: XOR-TyDi QA (Asai et al., 2021a) and MKQA (Longpre et al., 2021) . We first discuss the source datasets, and then discuss how the target languages are selected, and how the data is split into training and evaluation sets. (Kwiatkowski et al., 2019) and the answers are re-annotated for higher qualitychosen independently of any web pages or document corpora. From MKQA, we sample the 6,758 parallel examples which are answerable. We select 12 of the 26 languages to lower the computational barrier: Arabic (ar), English (en), Spanish (es), Finnish (fi), Japanese (ja), Khmer (km), Korean (ko), Malay (ms), Russian (ru), Swedish (sv), Turk-ish (tr), and traditional Chinese (zh-cn).",
"cite_spans": [
{
"start": 99,
"end": 119,
"text": "(Asai et al., 2021a)",
"ref_id": "BIBREF4"
},
{
"start": 129,
"end": 151,
"text": "(Longpre et al., 2021)",
"ref_id": "BIBREF33"
},
{
"start": 309,
"end": 335,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task Data",
"sec_num": "3"
},
{
"text": "We select a subset of languages from each resource (i) to cover a wide range of languages and typological features with a sufficient scale, and (ii) to compare participating model performance between questions that are translated from English and ones that are naturally generated by native speakers. The natively-written questions from XOR-TyDi QA allow measuring systems' quality on questions that are likely to serve information need expressed by speakers of each language, whereas the humantranslated questions of MKQA allow measuring the performance on the target script and language, holding constant the question content. For this reason, we include 5 languages present in both XOR-TyDi QA and MKQA to compare the gap between cultural and linguistic model generalization: Arabic, Finnish, Japanese, Korean, and Russian.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Selection",
"sec_num": "3.2"
},
{
"text": "Surprise languages. In addition, we newly annotated data in Tagalog (tl) and Tamil (ta), where little work studies open-retrieval QA . For each language, we sample 350 MKQA English examples, where the answer entities have an Wikipedia article in the target language. The 350 questions are all translated using Gengo's human translation, 5 but the answers are automatically translated using Wikidata. This annotation results in 350 well-formed examples in Tagalog (tl) and Tamil (ta). Surprise languages are released two weeks before the system submission deadline to test systems' ability to perform zero-shot transfer (Hu et al., 2020) to unseen languages that are substantially different from the languages they are trained on. Except for one system, all of the submissions directly apply their systems to the new languages without any training or adding new target languages' Wikipedia. Table 1 presents the list of the languages and statistics of the train, development and test set data in each target language.",
"cite_spans": [
{
"start": 619,
"end": 636,
"text": "(Hu et al., 2020)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 890,
"end": 897,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Language Selection",
"sec_num": "3.2"
},
{
"text": "Training data. Our training data consists of Natural Questions (Kwiatkowski et al., 2019) for English and XOR-TyDi QA for the other languages in the shared task. 6 In the constrained track (Section 2.3) only this data source is permitted for providing QA supervision, though other tools are permissible for data augmentation.",
"cite_spans": [
{
"start": 63,
"end": 89,
"text": "(Kwiatkowski et al., 2019)",
"ref_id": "BIBREF27"
},
{
"start": 162,
"end": 163,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "3.3"
},
{
"text": "Evaluation data. Our evaluation sets span 16 languages: 7 from XOR-TyDi QA and 12 from MKQA with an overlap of five languages and two surprise languages newly annotated for this shared task following MKQA annotation schema. We found that the original XOR-TyDi QA validation and test splits have different proportions of the inlanguage and cross-lingual questions, resulting in large performance gaps between dev and test subsets as reported by Asai et al. (2021b) . We re-split XOR-TyDi QA so that the validation and test sets have similar ratios of the two question types of inlanguage and cross-lingual questions. In-language questions are answerable from Wikipedia in the question's language, and are often easier to answer while the other category requires cross-lingual retrieval between the target language and English, and are more challenging. Further, we add aliases that can be retrieved via the Wikimedia API to the gold answers, following MKQA, thereby avoiding penalizing models for generating correct answers with surface-level differences. For MKQA we split the answerable examples into a validation set of 1,758 questions and a test set of 5,000 question. We add the newly annotated data for the surprise languages (Tamil and Tagalog) to the test set only.",
"cite_spans": [
{
"start": 444,
"end": 463,
"text": "Asai et al. (2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Statistics",
"sec_num": "3.3"
},
{
"text": "False negatives in evaluations. First, because the original source questions and answers are from TyDi QA or Natural Questions, their answers are annotated based on a single Wikipedia article in English or the question language. MKQA answers are re-labeled by English speakers without any Wikipedia or web corpus, but small portion of the answers can be geographically incorrect for that regions of the languages the data is translated into (e.g., when the first harry potter movie was released?). As we generalize the task setting to cross-lingual open retrieval, there are inconsistent contents across articles in different languages leading to many possible answers. However, because we only have one answer, this can penalize correct answers (Palta et al., 2022) . It is a common issue that open-retrieval QA datasets do not comprehensively cover all valid answers (Min et al., 2020a; Asai and Choi, 2021) , and this can be more prevalent in multilingual settings due to transliteration of entities or diverse ways to express numeric in some languages (Al-Onaizan and Knight, 2002) .",
"cite_spans": [
{
"start": 746,
"end": 766,
"text": "(Palta et al., 2022)",
"ref_id": "BIBREF37"
},
{
"start": 869,
"end": 888,
"text": "(Min et al., 2020a;",
"ref_id": null
},
{
"start": 889,
"end": 909,
"text": "Asai and Choi, 2021)",
"ref_id": "BIBREF2"
},
{
"start": 1056,
"end": 1085,
"text": "(Al-Onaizan and Knight, 2002)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "3.4"
},
{
"text": "English American-centric biases. Second, the MKQA questions as well as the new data annotated for this shared task are translated from English. This annotation scheme enables us to scale up to many typologically diverse languages, but the resulting questions are likely to be Western-or specifically American-centric, rather than reflecting native speakers' interests and unique linguistic phenomena (Clark et al., 2020) . We try to reduce such English-centric bias by only using the questions whose answer entities are also included in Tamil or Tagalog Wikipedia, though this constrains the distribution to simple factoid questions. We also found that in some languages, MKQA answers have high overlap with their English counterparts.",
"cite_spans": [
{
"start": 400,
"end": 420,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limitations",
"sec_num": "3.4"
},
{
"text": "We use a state-of-the-art open-retrieval QA model as our baseline. We open source the code, trained checkpoints, training data, and intermediate/final prediction results. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Models",
"sec_num": "4"
},
{
"text": "Our baseline model is based on CORA (Asai et al., 2021b) , which has two components: mDPR for document retrieval and mGEN for answer generation. Both mDPR and mGEN are based on multilingual pretrained models to process data written in many different languages without relying on external translation modules. Given a question q L written in a language L, mDPR R retrieves top N passages:",
"cite_spans": [
{
"start": 36,
"end": 56,
"text": "(Asai et al., 2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
{
"text": "P = p 1 , . . . , p N = R(q L )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
{
"text": ". mDPR includes all of the target languages' Wikipedias as its retrieval target, except for the two surprise languages. mGEN G takes as input q and P and generates an answer a L in the target language: a L = G(q, P). mDPR is a multilingual extension of DPR (Karpukhin et al., 2020) , which employs a dual-encoder architecture based on BERT and retrieves top passages based on the dot-product similarities between encoded representations. During training, mDPR optimizes the loss function as the negative log likelihood of the positive passages. mGEN simply concatenates the question and a set of top K passages, and the fine-tuned multilingual encoderdecoder model generates a final answer in the target language. Unlike some prior work in English conducting end-to-end training of the retriever and reader (Lewis et al., 2020c; Guu et al., 2020) , we train mDPR and mGEN independently. Note that during mGEN training, we use the passages retrieved by the trained mDPR, as in Izacard and Grave (2021a).",
"cite_spans": [
{
"start": 257,
"end": 281,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 807,
"end": 828,
"text": "(Lewis et al., 2020c;",
"ref_id": null
},
{
"start": 829,
"end": 846,
"text": "Guu et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Modeling",
"sec_num": "4.1"
},
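To make the retrieve-then-generate flow above concrete, here is a schematic sketch; the question encoder, passage matrix, and generator are placeholders for the mDPR encoders and the mGEN model, and none of the names come from the released baseline code.

```python
import numpy as np

# Schematic sketch of the mDPR + mGEN pipeline described above. The encoder
# and generator callables are placeholders, not the released CORA/baseline code.

def retrieve_top_n(question_vec: np.ndarray, passage_matrix: np.ndarray, n: int):
    """Score all passages by inner product with the question vector (mDPR-style)
    and return the indices of the top-n passages."""
    scores = passage_matrix @ question_vec
    return np.argsort(-scores)[:n]

def answer(question, question_encoder, passage_matrix, passages, generator, n=15):
    """Retrieve top-n passages, then generate an answer in the question's language
    from the question concatenated with the retrieved passages (mGEN-style)."""
    q_vec = question_encoder(question)            # e.g. a multilingual BERT embedding
    top_ids = retrieve_top_n(q_vec, passage_matrix, n)
    context = " ".join(passages[i] for i in top_ids)
    return generator(f"{question} {context}")
```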
{
"text": "We use the official training data for training. We also leverage the long answer annotations in the Natural Questions dataset and the gold paragraph annotations of XOR-TyDi QA to create mDPR training data, released at the shared task repository.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.2"
},
{
"text": "8 After training mDPR, we run it on the shared task training data questions to obtain top passages, and then use those retrieved passages to train the mGEN model: mGEN is trained to generate the gold answer given an input query and top retrieved passages. mDPR uses multilingual BERT-base uncased , and mGEN is fine-tuned from mT5-base (Xue et al., 2021) . For mDPR, we use the same hyperparameters as in DPR (Karpukhin et al., 2020) , and train it for 30 epochs, and take the last checkpoint. For mGEN, we follow Asai et al. (2021b) hyperparameters.",
"cite_spans": [
{
"start": 336,
"end": 354,
"text": "(Xue et al., 2021)",
"ref_id": null
},
{
"start": 409,
"end": 433,
"text": "(Karpukhin et al., 2020)",
"ref_id": "BIBREF26"
},
{
"start": 514,
"end": 533,
"text": "Asai et al. (2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.2"
},
{
"text": "8 https://github.com/mia-workshop/ MIA-Shared-Task-2022#training-data 4.3 Pre-processing Knowledge Corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.2"
},
{
"text": "Following DPR and mDPR, we split each article into 100-token chunks based on whitespace. For non-spacing languages (e.g., Japanese, Thai), we tokenize the articles using off-the-shelf tokenizers (i.e., MeCab for Japanese 9 and Thai NLP for Thai 10 ). We exclude passages with less than 20 tokens. Total numbers of passages for each language are listed in Table 1 .",
"cite_spans": [],
"ref_spans": [
{
"start": 355,
"end": 362,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Training and Hyperparameters",
"sec_num": "4.2"
},
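The chunking rule just described is simple enough to sketch directly. The snippet below is an illustrative reconstruction, not the repository's preprocessing script; `tokenize` defaults to whitespace splitting, and a MeCab or PyThaiNLP tokenizer would be passed in for non-spacing languages.

```python
# Illustrative sketch of the corpus pre-processing described above: split each
# article into 100-token chunks and drop chunks with fewer than 20 tokens.
# `tokenize` defaults to whitespace splitting; a MeCab or PyThaiNLP tokenizer
# would be supplied for non-spacing languages (assumption, not the official code).

def split_into_passages(article: str, tokenize=str.split,
                        chunk_size: int = 100, min_tokens: int = 20):
    tokens = tokenize(article)
    chunks = [tokens[i:i + chunk_size] for i in range(0, len(tokens), chunk_size)]
    return [" ".join(chunk) for chunk in chunks if len(chunk) >= min_tokens]
```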
{
"text": "Four teams submitted their final systems to our EvalAI (Yadav et al., 2019) leaderboard, 11 three of which significantly outperformed the original baseline described in Section 4. We summarize the submitted systems here and refer readers to their system description paper for details.",
"cite_spans": [
{
"start": 55,
"end": 75,
"text": "(Yadav et al., 2019)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Shared Task Submissions",
"sec_num": "5"
},
{
"text": "mLUKE+FiD. Tu and Padmanabhan (2022) adapt the retrieve-then-read baseline system with several improvements, including (a) using an mLUKE encoder (Ri et al., 2022) for dense retrieval, (b) combining sparse and dense retrieval, (c) using a fusion-in-decoder reader (Izacard and Grave, 2021b) , and (d) leveraging Wikipedia links to augment the training data with additional target language labels. For retrieval, Tu and Padmanabhan (2022) use the 2019/02/01 Wikipedia snapshot as their document corpora, matching the baseline. They include the Wikipedia snapshots for Tamil and Tagalog to evaluate on the surprise languages. Their sparse retriever searches the monolingual corpora only, while their dense retriever searches all corpora.",
"cite_spans": [
{
"start": 11,
"end": 36,
"text": "Tu and Padmanabhan (2022)",
"ref_id": "BIBREF43"
},
{
"start": 146,
"end": 163,
"text": "(Ri et al., 2022)",
"ref_id": null
},
{
"start": 264,
"end": 290,
"text": "(Izacard and Grave, 2021b)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Systems",
"sec_num": "5.1"
},
{
"text": "CMUmQA. Agarwal et al. (2022) build a fourstage pipeline for a retrieve-then-read approach, based on the CORA open-retrieval system (Asai et al., 2021b ) that searches evidence documents in any language for target questions (many-to-many QA; Asai et al., 2021b) , without relying on translation. They first apply an mBERT-based DPR retrieval model, followed by a reranker (Qu et al., 2021) with XLM-RoBERTA (Conneau et al., 2020 retrieval, the reranker has the advantage of encoding a question and a passage together, rather than independently. An mT5-based fusion-in-decoder is then applied to generate an answer. As the final step of their pipeline, Wikidata is used to translate English entities in the answer into the target language, if any.",
"cite_spans": [
{
"start": 132,
"end": 151,
"text": "(Asai et al., 2021b",
"ref_id": "BIBREF5"
},
{
"start": 242,
"end": 261,
"text": "Asai et al., 2021b)",
"ref_id": "BIBREF5"
},
{
"start": 372,
"end": 389,
"text": "(Qu et al., 2021)",
"ref_id": "BIBREF38"
},
{
"start": 407,
"end": 428,
"text": "(Conneau et al., 2020",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Systems",
"sec_num": "5.1"
},
{
"text": "ZusammenQA. Hung et al. (2022) follow the retrieve-then-read system, but with the expansion of several components, along with training methods and data augmentation. Their retriever ensembles supervised models (mDPR and mDPR with a MixCSE loss; Wang et al., 2022) along with unsupervised sparse (Oracle BM-25) and unsupervised dense models (DISTIL, LaBSE, MiniLM, MPNet).",
"cite_spans": [
{
"start": 12,
"end": 30,
"text": "Hung et al. (2022)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Systems",
"sec_num": "5.1"
},
{
"text": "The reader system is based on mGEN, but with domain adaptation by continued masked language modeling on the document corpora, to better adapt to Wikipedia and the target languages. The training data is augmented using Dugan et al. (2022) that generates question-answer pairs from raw document corpora and translates them into multiple languages.",
"cite_spans": [
{
"start": 218,
"end": 237,
"text": "Dugan et al. (2022)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Constrained Systems",
"sec_num": "5.1"
},
{
"text": "Texttron. This unconstrained submission also follows the retrieve-then-read structure: the retrieval model performs dense passage retrieval with XLM-RoBERTa Large (Conneau et al., 2020) , and the reading model uses mt5 large. The retrieval text is split into paragraphs (as opposed to 100word text segments) extracted by the WikiExtractor package. The retrieval model is trained on a combination of three types of custom training data: target-to-target (both the query and retrieved paragraphs are in the target language), target-to-English (the query is in the target language and the retrieval paragraphs are in English), and English-to-English (both the query and retrieved paragraphs are in English). These data are created based on BM25 retrieval and query translation.",
"cite_spans": [
{
"start": 163,
"end": 185,
"text": "(Conneau et al., 2020)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Unconstrained Systems",
"sec_num": "5.2"
},
{
"text": "Texttron also used multiple stages of training and negative sample mining to tune their final dense retriever with hard negatives: a combination of BM25 and examples from the previous iteration of retrieval that had low token overlap with the gold answers. No system description was available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Unconstrained Systems",
"sec_num": "5.2"
},
{
"text": "Tables 2 and 3 show final results on XOR-TyDi QA and MKQA subsets, respectively. Three systems are submitted in the constrained setting, while Texttron is an unconstrained submission.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "6"
},
{
"text": "Macro performance. Texttron, mLUKE + mFiD, and CMUmQA significantly improve the baseline performance. Among the constraint submissions, mLUKE + mFiD yields the best performance. While several systems achieve higher than 40 average F1 on XOR-TyDi QA, only two systems achieve higher than 20 average F1 on MKQA, demonstrating how difficult it is to build a system that performs well in many languages without language-specific supervision. Texttron significantly outperforms other baselines on XOR-TyDi QA while CMUmQA shows the best MKQA performance among the submitted systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "6"
},
{
"text": "Language-wise performance. The performance varies across different languages. Among XOR-TyDi QA, all of the systems struggle in Korean and Bengali, while in Arabic, Japanese and Russian, they generally show relatively high F1 scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "6"
},
{
"text": "On MKQA, where all of the questions are parallel, the performance still significantly differs across languages. Almost all of the systems report lower than 10 F1 in Khmer and Tamil, which are less represented in existing pretraining corpora (Xue et al., 2021) and use their own script systems-with the notable exception of mLUKE + FiD, which achieves 20.8 F1 on Tamil. mLUKE+FiD achieves substantially better performance than other systems in Tamil. This is partially because they also include the Tamil Wikipedia passages for passage retrieval, while other systems, including the baseline, do not. As discussed in Asai et al. (2021b) , all systems show lower scores in the languages that are distant from English and use non-Latin scripts (e.g., Cyrillic for Russian, Hangul for Korean).",
"cite_spans": [
{
"start": 241,
"end": 259,
"text": "(Xue et al., 2021)",
"ref_id": null
},
{
"start": 615,
"end": 634,
"text": "Asai et al. (2021b)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Main Results",
"sec_num": "6"
},
{
"text": "We provide further analysis on the submitted systems. In Section 7.1 we provide a brief summary of the findings from the submitted system descriptions. Section 7.2 provides performance comparison over answer-type, and answer overlap with English or training data. We then analyze the degree of answer agreements among the submitted systems to understand which questions remain challenging in Section 7.3. We further conduct manual error analysis in five languages in Section 7.4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "7"
},
{
"text": "In this section, we highlight several effective techniques from the submitted systems. Overall, a surprisingly wide range of complementary, and potentially additive, methods all reported strong benefits, including: (i) larger and longer pre-trained models for retrieving and reading, (ii) a reranking step with fusion-in-decoder multi-passage crossencodings, (iii) iterative dense retrieval tuning with progressively harder negative example mining, (iv) using entity-aware retrieval encodings, (v) combining dense and sparse retrievers, (vi) data augmentation, and (vii) leveraging Wikidata answer post-processing for language localization. We discuss some of these below.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "These findings highlight various techniques migrating the performances in English retrieval systems. And most of all, they emphasize that crosslingual retrieval still poses the major bottleneck to the end-to-end task, while large multilingual fusion-in-decoder reader systems can operate well when given sufficient evidence. These findings suggest multilingual retrieval is the most important avenue for future research, especially on questions not easily answered by English Wikipedia. Moreover, retrieving evidence cross-lingually is keys for other knowledge intensive NLP tasks such as fact verification (Thorne et al., 2018) and knowledgegrounded dialogues (Dinan et al., 2019) beyond open-retrieval QA.",
"cite_spans": [
{
"start": 607,
"end": 628,
"text": "(Thorne et al., 2018)",
"ref_id": "BIBREF42"
},
{
"start": 661,
"end": 681,
"text": "(Dinan et al., 2019)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "Entity representations. Using entity-aware representations for the passage retriever's encoders gives a large performance improvement; As shown in analysis by Team Utah (Tu and Padmanabhan, 2022) , replacing mBERT encoders in DPR with mLUKE improves by 1.22 F1 on XOR macroaverage and 1.85 MKQA macro F1. We hypothesize that the mLUKE may capture better crosslingual entity alignment than mBERT as it leverages inter-language links in Wikipedia during pretraining. This sheds light on the potential effectiveness of multilingual entity contextualized representations for cross-lingual passage representations, which is an under-explored direction.",
"cite_spans": [
{
"start": 169,
"end": 195,
"text": "(Tu and Padmanabhan, 2022)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "Combining dense and sparse retrievers & hard negatives. Texttron and Team Utah combine both BM25 and mDPR, while ZusammenQA explore a diverse set of unsupervised and supervised retrieval approaches including BM25 and LaBSE (Feng et al., 2022) . Team Utah shows that combining BM25 with mDPR helps, while ZusammenQA shows that only using BM25 gives significantly lower scores than the original baseline (Hung et al., 2022) , as BM25 does not have cross-lingual phrase matching capabilities. Texttron iteratively trained their dense retriever, mining increasingly hard negative examples using BM25 and query translation, filtered using simple heuristics.",
"cite_spans": [
{
"start": 223,
"end": 242,
"text": "(Feng et al., 2022)",
"ref_id": "BIBREF17"
},
{
"start": 402,
"end": 421,
"text": "(Hung et al., 2022)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "Fusion-in-Decoder and passage reranking. Team Utah and CMUmQA demonstrate that Fusion-in-Decoder architectures outperform simply concatenating passages as in mGEN (Fusionin-Encoder). While Fusion-in-Encoder simply concatenates retrieved passages in a retrieved order, Fusion-in-Decoder encodes each of the retrieved passages independently and then concatenate them. This may help the model to pay more attentions to the passages that are ranked lower by the retriever but indeed provides evidence to answer. Recent work in open domain QA also demonstrates that the Fusion-in-Decoder architecture is more competitive than prior systems that simply concatenate passages (Fajcik et al., 2021; Asai et al., 2022) .",
"cite_spans": [
{
"start": 668,
"end": 689,
"text": "(Fajcik et al., 2021;",
"ref_id": "BIBREF16"
},
{
"start": 690,
"end": 708,
"text": "Asai et al., 2022)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
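The architectural difference can be sketched schematically as below; `encode` and `decode` stand in for a seq2seq model's encoder and decoder, so this illustrates only the input construction, not any team's actual implementation.

```python
# Schematic contrast between the two reader styles discussed above. `encode`
# maps a string to a sequence of vectors and `decode` maps vectors to an answer
# string; both are placeholders rather than a specific library API.

def fusion_in_encoder(question, passages, encode, decode):
    """mGEN-style: concatenate all retrieved passages into one long input."""
    joined = question + " " + " ".join(passages)
    return decode(encode(joined))

def fusion_in_decoder(question, passages, encode, decode):
    """FiD-style: encode each (question, passage) pair independently, then let
    the decoder attend over the concatenation of all encoded representations."""
    encoded = [encode(f"question: {question} passage: {p}") for p in passages]
    fused = [vec for enc in encoded for vec in enc]  # concatenate representations
    return decode(fused)
```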
{
"text": "Team Utah show increasing the number of passages improves performance, while CMUmQA show that cross-encoder reranking is particularly beneficial for Fusion-in-Decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "Data augmentation. ZusammenQA introduces data augmentation using Google Translate to translate the training data into target languages. AUG-QA translates question-answer pairs into target languages, while AUG-QAP translates question, answer and the original training data passages into the target languages. They found that the AUG-QAP and AUG-QA both improve performance from their direct counterpart without data augmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "Wikipedia answer localization. CMUmQA and others used Wikidata entity maps to localize answers to the correct target script following Longpre et al. (2021) . This process was particularly effective for localizing short answers into a target language from English due to the overwhelming English bias of retrieval and generative systems finetuned on English. As a result, CMUmQA obtains the best MKQA performance among the submitted systems.",
"cite_spans": [
{
"start": 134,
"end": 155,
"text": "Longpre et al. (2021)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Summary of Findings",
"sec_num": "7.1"
},
{
"text": "In this section, we group questions based on several factors (e.g., answer types) and compare the models' performance across different sub-groups.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
{
"text": "Answer types. MKQA provides answer categories for each question. We analyze the percategory model performance to understand what types of questions remain challenging. The original MKQA source data except for the unanswerable subsets has the following answer type distributions: Entity (42%), Date (12%), Number (5%), Number with Unit (4%), Short Phrase (3%), Boolean (yes, no; 1%), Unanswerable (14%), and Long Answers Table 4 : The percentage of the exact match per answer types in English (en), Spanish (es), Japanese (ja) and Chinese (zh).",
"cite_spans": [],
"ref_spans": [
{
"start": 420,
"end": 427,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
{
"text": "(13%). The Unanswerable and Long Answers categories are excluded from the MIA 2022 shared task evaluation data. We present the percentage of the questions where any of the submitted system predictions match the annotated gold answers in English, Spanish, Japanese and Chinese in Table 4 . In all of the languages, the systems show relatively higher exact matching rate in Entity types questions except for Chinese and Japanese. In those languages, many of the entity names are written in their own script systems (e.g., Chinese characters, katakana), which is challenging to be generated from the evidence passages written in other languages; it is known to be challenging to translate an entity name from one language to another using different script systems (Wang et al., 2017) . In English and Spanish, the systems show significantly higher accuracy on entity and date than in Japanese or Chinese, while the systems struggle in Boolean questions. XOR-TyDi QA Japanese subset shows higher percentage of boolean questions than other subsets, which potentially helps the systems in Japanese and Chinese MKQA boolean questions. All of the systems show significantly lower performance in short phrase questions, indicating the difficulty of generating phrase length answers beyond simple factoid questions with entity or date answers.",
"cite_spans": [
{
"start": 761,
"end": 780,
"text": "(Wang et al., 2017)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [
{
"start": 279,
"end": 286,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
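The per-type statistic reported in Table 4 can be computed with a short sketch like the one below; the field names are illustrative and not the shared-task data schema, and exact match is simplified to string equality after the language-specific normalization described in Section 2.2.

```python
from collections import defaultdict

# Sketch of the Table 4 statistic: the percentage of questions, grouped by MKQA
# answer type, for which at least one submitted system's prediction exactly
# matches a gold answer. Field names are illustrative, not the official schema.

def any_system_em_by_type(examples):
    """`examples`: iterable of dicts with keys 'answer_type',
    'gold_answers' (list of normalized strings), 'predictions' (list of strings)."""
    hit, total = defaultdict(int), defaultdict(int)
    for ex in examples:
        total[ex["answer_type"]] += 1
        if any(pred in ex["gold_answers"] for pred in ex["predictions"]):
            hit[ex["answer_type"]] += 1
    return {t: 100.0 * hit[t] / total[t] for t in total}
```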
{
"text": "Answer overlaps with English. We analyze performances across languages by examining the relationship between the final performance and the number of the questions whose answers are the same as English answers. Figure 1a shows the performance of the best constrained track submission, mLUKE + FiD and answer overlap with the English subsets for each MKQA language except for Khmer and two surprise languages. We observe a clear correlation between the answer overlap and final performance among those languages. The model performs well on the languages where many answers are the same as English answers. Finnish, on the other hand, shows relatively lower performance compared to other languages with high answer overlap (i.e., Malay, Swedish, Spanish). Among the languages with low answer overlap, on the Japanese and Chinese sets, the system shows relatively high F1 scores compared to the other languages with lower than 40% overlap (i.e., Russian, Korean, Arabic). This is likely because Chinese and Japanese show higher accuracy on Boolean type questions than other languages as discussed above.",
"cite_spans": [],
"ref_spans": [
{
"start": 210,
"end": 219,
"text": "Figure 1a",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
{
"text": "Answer overlap with training data. Prior work shows that the high overlap between train and test data can result in the overestimated performance of the systems (Lewis et al., 2021) . In XOR-TyDi QA, the questions are annotated by native speakers of the target languages, so the percentage of the train-test overlap can vary across languages. We calculate the percentage of the answers for the test data questions that also appear as gold answers in XOR-TyDi QA training data. We then check whether the degree of the answer overlap between the train and test sets correlate with the final XOR-TyDi QA test performance. Figure 1b shows the performance and train-test overlap percentage. Although we can see the percentage of overlap between train and test data varies across languages, it is not particularly correlated with the final performance. For instance, Bengali actually shows relatively high overlap between train and test data (over 25% answer overlap), but the performance is much lower than Telugu, whose answer overlap ratio is close to that of Bengali. We also found that the percentage of the Boolean questions (yes, no) significantly differs across languages: in Japanese, around 10% of the questions are Boolean questions, while in Telugu, almost no questions are Boolean. The original TyDi QA data is annotated by different groups of annotators for each language, and thus such question distributions can differ (Clark et al., 2020) .",
"cite_spans": [
{
"start": 161,
"end": 181,
"text": "(Lewis et al., 2021)",
"ref_id": "BIBREF31"
},
{
"start": 1429,
"end": 1449,
"text": "(Clark et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 619,
"end": 628,
"text": "Figure 1b",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
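The train-test answer overlap statistic used here can be sketched as below; normalization is simplified to lower-casing, whereas the actual analysis would use the language-specific normalization of the official evaluation scripts.

```python
# Sketch of the train-test answer overlap described above: the percentage of
# test questions whose gold answer string also appears as a gold answer in the
# XOR-TyDi QA training data. Normalization is simplified (lower-casing only).

def answer_overlap(train_answers, test_answers) -> float:
    train_set = {a.strip().lower() for a in train_answers}
    overlapping = sum(1 for a in test_answers if a.strip().lower() in train_set)
    return 100.0 * overlapping / len(test_answers)
```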
{
"text": "XOR-TyDi QA vs. MKQA. Arabic, Japanese, Korean, and Finnish are included both in MKQA and XOR-TyDi QA, but their performance on the two subsets significantly differ; In general, the XOR-TyDi QA F1 scores are much higher than MKQA (e.g., Japanese: 44.71 vs. 23.11). We hypothesize that this happens because we do not have training data for MKQA and all MKQA questions tend to require cross-lingual retrieval as the questions are translated from English and answers are American-centric. In contrast, half of the questions in XOR-TyDi QA are from TyDi QA, and the answers are grounded to their own languages' Wikipedia. Cross-lingual retrieval is generally more challenging than monolingual retrieval . In addition, all of the XOR-TyDi QA cross-lingual questions are labeled \"unanswerable\" in TyDi QA, and can be more difficult to answer than its monolingual counterparts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
{
"text": "To further test this hypothesis, we evaluate the submitted systems' performance on XOR-TyDi QA's cross-lingual and monolingual subsets in Table 5. We can clearly see that all of the baseline's performance deteriorates on the cross-lingual subsets, while they show high F1 scores across languages on the monolingual subsets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance Comparison",
"sec_num": "7.2"
},
{
"text": "We analyze how often all of the systems agree on the same answers on the MKQA test data in five languages. In particular, we compare all of the four system predictions on the English, Japanese, Chinese, Spanish and Turkish subsets of the MKQA test data, and check the prediction agreements based on the number of the unique predictions among the union of the predictions. We can see that in English and Spanish, the agreement is high (e.g., in 40% of the questions, all or three of the four systems agree on the same answers), while the agreement is lower in other languages, particularly in Japanese and Chinese. To understand the phenomena, we breakdown the prediction agreement statistics in English and Japanese into different answer categories. Figure 2b and Figure 2c show per-category prediction agreements in English and Japanese, respectively. While in English, systems show high agreements in date, entity and number type questions, in Japanese, the agreement rate is lower across category, potentially because of their diverse formats of number and dates, as well as the transliteration of the entity names.",
"cite_spans": [],
"ref_spans": [
{
"start": 750,
"end": 773,
"text": "Figure 2b and Figure 2c",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Prediction Agreement",
"sec_num": "7.3"
},
{
"text": "We conduct a set of error analysis in five languages (i.e., English, Japanese, Korean, Chinese and Telugu) on randomly sampled 30 questions, where none of the submission systems' predictions exactly match any of the ground truth answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7.4"
},
{
"text": "Error types. We classify the errors into following categories: (i) incorrect predictions, (ii) answers are semantically correct in different languages (incorrect languages), (iii) incorrect gold answers, (iv) semantically-equivalent predictions in the target language but are penalized because gold answers do not cover all of the potential gold answers (not comprehensive gold answers), (v) ques-tions are open-ended or ambiguous (e.g., entity ambiguity), (vi) questions' granularity is unclear (unclear question granularity; e.g., year v.s. month, kilometers v.s. meters), (vii) questions are highly subjective (e.g., who is the best singer ever), (viii) temporal or geographical dependency in questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7.4"
},
{
"text": "The first two error types, (i) and (ii), reveal the limitations of models. The error type (iii) and (iv) are considered answer annotation errors (Min et al., 2020a; Asai and Choi, 2021) . The last four error types (v), (vi), (vii) and (viii) requires some specifications or context (Zhang and Choi, 2021; Min et al., 2020b) .",
"cite_spans": [
{
"start": 145,
"end": 164,
"text": "(Min et al., 2020a;",
"ref_id": null
},
{
"start": 165,
"end": 185,
"text": "Asai and Choi, 2021)",
"ref_id": "BIBREF2"
},
{
"start": 282,
"end": 304,
"text": "(Zhang and Choi, 2021;",
"ref_id": "BIBREF48"
},
{
"start": 305,
"end": 323,
"text": "Min et al., 2020b)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7.4"
},
{
"text": "Error analysis schema. We recruit native speakers of the five target languages and ask them to classify the errors into the aforementioned categories. We present the predictions of all of the systems as well as the intermediate retrieval results of the top constrained system (Team Utah).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7.4"
},
{
"text": "Error analysis results. Table 6 provides the error analysis result. Besides modeling errors, we found that the original annotations themselves exhibit some issues, which underestimates models' performance. Across languages, annotators found non-negligible proportion of the errors happen as the original gold answers do not cover all of the possible answer aliases or the answer granularity is unclear. For instance, an English question asks \"what is the temperature at the center of earth\" and the gold answer is 6000\u00b0C. Several systems answer in Fahrenheit or Kelvin, and got zero F1 score. Several questions are also temporal or geographical de-English Arabic Japanese Korean Chinese (i) incorrect predictions 12 9 23 16 12 (ii) incorrect languages 0 2 3 0 2 (iii) incorrect gold answers 2 4 5 1 0 (iv) not comprehensive gold answers 10 1 7 5 6 (v) ambiguous question 3 7 6 15 5 (vi) unclear question granularity 3 2 1 2 0 (vii) subjective question 0 0 0 0 0 (viii) temporal or geographical dependency in questions 4 4 1 4 5 Table 6 : Error analysis on sampled questions where all of the submissions unanimously fail to predict the correct answers. We show the percentage of the errors in each category. pendent such as \"who was the last person appointed to the u.s. supreme court\" or \u30af\u30ea\u30df\u30ca\u30eb\u30fb\u30de\u30a4 \u30f3\u30c9\u306e\u65b0\u30b7\u30fc\u30ba\u30f3\u304c\u516c\u958b\u3055\u308c\u308b\u306e\u306f\u3044\u3064\u304b (when is the next season of Criminal Minds will be released?). Although situation-grounded QA has been recently studied (Zhang and Choi, 2021 ), there's little work that analyzes this phenomena in multilingual settings, where the particularly geographical dependence can be even more prevalent. Question ambiguity is also common in multilingual QA.",
"cite_spans": [
{
"start": 1437,
"end": 1458,
"text": "(Zhang and Choi, 2021",
"ref_id": "BIBREF48"
}
],
"ref_spans": [
{
"start": 24,
"end": 31,
"text": "Table 6",
"ref_id": null
},
{
"start": 1028,
"end": 1035,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7.4"
},
{
"text": "We have presented the MIA 2022 Shared Task on cross-lingual open-retrieval QA systems in 16 typologically diverse languages, many of which are unseen during training. Several submissions improved significantly over our baseline based on a state-of-the-art cross-lingual open-retrieval QA system and investigated a wide range of techniques. Those results shed light on the effectiveness of several techniques in this challenging task, such as entity-enhanced representations, sparse-dense retrieval, and better interactions between passages. We further conducted detailed performance analysis on different subsets of the datasets, such as languages, answer types, the necessity of crosslingual retrieval as well as detailed error analysis. We also suggest several bottlenecks in the area.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Discussions",
"sec_num": "8"
},
{
"text": "For non-spacing languages (i.e., Japanese, Khmer, and Chinese), we use off-the-shelf tokenizers including Mecab, khmernltk and jieba to tokenize both predictions and groundtruth answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://gengo.com/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "See the training data linked at https://github. com/mia-workshop/MIA-Shared-Task-2022# training-data",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/mia-workshop/ MIA-Shared-Task-2022",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://taku910.github.io/mecab/. 10 https://github.com/PyThaiNLP/ pythainlp.11 https://eval.ai/web/challenges/ challenge-page/1638/leaderboard",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to acknowledgments and Noah A. Smith for serving as our steering committee. We are grateful to Google for providing funding for our workshop. We thank GENGO translators to translate questions into Tamil and Tagalog. we thank the EvalAI team, particularly Ram Ramrakhya, for their help with hosting the shared task submission site. We thank Maraim Masoud for her help in error analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Zero-shot crosslingual open domain question answering",
"authors": [
{
"first": "Sumit",
"middle": [],
"last": "Agarwal",
"suffix": ""
},
{
"first": "Suraj",
"middle": [],
"last": "Tripathi",
"suffix": ""
},
{
"first": "Teruko",
"middle": [],
"last": "Mitamura",
"suffix": ""
},
{
"first": "Carolyn Penstein",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of MIA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sumit Agarwal, Suraj Tripathi, Teruko Mitamura, and Carolyn Penstein Rose. 2022. Zero-shot cross- lingual open domain question answering. In Proc. of MIA.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Translating named entities using monolingual and bilingual resources",
"authors": [
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaser Al-Onaizan and Kevin Knight. 2002. Translat- ing named entities using monolingual and bilingual resources. In Proc. of ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Challenges in information-seeking QA: Unanswerable questions and paragraph retrieval",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akari Asai and Eunsol Choi. 2021. Challenges in information-seeking QA: Unanswerable questions and paragraph retrieval. In Proc. of ACL.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Evidentiality-guided generation for knowledge-intensive nlp tasks",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Gardner",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Matt Gardner, and Hannaneh Ha- jishirzi. 2022. Evidentiality-guided generation for knowledge-intensive nlp tasks. In In Proc. of NAACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "XOR QA: Cross-lingual open-retrieval question answering",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Jungo Kasai, Jonathan H Clark, Kenton Lee, Eunsol Choi, and Hannaneh Hajishirzi. 2021a. XOR QA: Cross-lingual open-retrieval question answering. In Proc. of NAACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "One question answering model for many languages with cross-lingual dense passage retrieval",
"authors": [
{
"first": "Akari",
"middle": [],
"last": "Asai",
"suffix": ""
},
{
"first": "Xinyan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Jungo",
"middle": [],
"last": "Kasai",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://link.springer.com/chapter/10.1007/978-3-642-04447-2_34"
]
},
"num": null,
"urls": [],
"raw_text": "Akari Asai, Xinyan Yu, Jungo Kasai, and Hannaneh Hajishirzi. 2021b. One question answering model for many languages with cross-lingual dense passage retrieval. In Proc. of NeurIPS.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Systematic inequalities in language technology performance across the world's languages",
"authors": [
{
"first": "Damian",
"middle": [],
"last": "Blasi",
"suffix": ""
},
{
"first": "Antonios",
"middle": [],
"last": "Anastasopoulos",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Damian Blasi, Antonios Anastasopoulos, and Gra- ham Neubig. 2022. Systematic inequalities in lan- guage technology performance across the world's languages. In Proc. of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Reading Wikipedia to answer opendomain questions",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Fisch",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proc. of ACL.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Open-domain question answering",
"authors": [
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ACL: Tutorial Abstracts",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Danqi Chen and Wen-tau Yih. 2020. Open-domain question answering. In Proc. of ACL: Tutorial Ab- stracts.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "TyDi QA: A benchmark for information-seeking question answering in typologically diverse languages",
"authors": [
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Garrette",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Vitaly",
"middle": [],
"last": "Nikolaev",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan H. Clark, Eunsol Choi, Michael Collins, Dan Garrette, Tom Kwiatkowski, Vitaly Nikolaev, and Jennimaria Palomaki. 2020. TyDi QA: A benchmark for information-seeking question answering in typo- logically diverse languages. TACL.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised cross-lingual representation learning at scale",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Kartikay",
"middle": [],
"last": "Khandelwal",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Vishrav",
"middle": [],
"last": "Chaudhary",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Wenzek",
"suffix": ""
},
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzm\u00e1n, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proc. of ACL.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A span-extraction dataset for Chinese machine reading comprehension",
"authors": [
{
"first": "Yiming",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Zhipeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wentao",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Shijin",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Guoping",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.560"
]
},
"num": null,
"urls": [],
"raw_text": "Yiming Cui, Ting Liu, Wanxiang Che, Li Xiao, Zhipeng Chen, Wentao Ma, Shijin Wang, and Guoping Hu. 2019. A span-extraction dataset for Chinese machine reading comprehension. In Proc. of EMNLP.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proc. of NAACL.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "FQuAD: French question answering dataset",
"authors": [
{
"first": "Wacim",
"middle": [],
"last": "Martin D'hoffschmidt",
"suffix": ""
},
{
"first": "Quentin",
"middle": [],
"last": "Belblidia",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Heinrich",
"suffix": ""
},
{
"first": "Maxime",
"middle": [],
"last": "Brendl\u00e9",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vidal",
"suffix": ""
}
],
"year": 2020,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin d'Hoffschmidt, Wacim Belblidia, Quentin Heinrich, Tom Brendl\u00e9, and Maxime Vidal. 2020. FQuAD: French question answering dataset. In Find- ings of EMNLP.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Wizard of wikipedia: Knowledge-powered conversational agents",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Kurt",
"middle": [],
"last": "Shuster",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2019. Wizard of wikipedia: Knowledge-powered conversational agents. In Proc. of ICLR.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "A feasibility study of answer-unaware question generation for education",
"authors": [
{
"first": "Liam",
"middle": [],
"last": "Dugan",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Shriyash",
"middle": [],
"last": "Upadhyay",
"suffix": ""
},
{
"first": "Etan",
"middle": [],
"last": "Ginsberg",
"suffix": ""
},
{
"first": "Hannah",
"middle": [],
"last": "Gonzalez",
"suffix": ""
},
{
"first": "Dahyeon",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Chuning",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2022,
"venue": "Findings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liam Dugan, Eleni Miltsakaki, Shriyash Upadhyay, Etan Ginsberg, Hannah Gonzalez, DaHyeon Choi, Chuning Yuan, and Chris Callison-Burch. 2022. A feasibility study of answer-unaware question genera- tion for education. In Findings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "R2-D2: A modular baseline for opendomain question answering",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Fajcik",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Docekal",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Ondrej",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Smrz",
"suffix": ""
}
],
"year": 2021,
"venue": "Findings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Fajcik, Martin Docekal, Karel Ondrej, and Pavel Smrz. 2021. R2-D2: A modular baseline for open- domain question answering. In Findings of EMNLP.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Language-agnostic BERT sentence embedding",
"authors": [
{
"first": "Fangxiaoyu",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Yinfei",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Naveen",
"middle": [],
"last": "Arivazhagan",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen Ari- vazhagan, and Wei Wang. 2022. Language-agnostic BERT sentence embedding. In Proc. of ACL.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Evaluating multilingual question answering systems at CLEF",
"authors": [
{
"first": "Pamela",
"middle": [],
"last": "Forner",
"suffix": ""
},
{
"first": "Danilo",
"middle": [],
"last": "Giampiccolo",
"suffix": ""
},
{
"first": "Bernardo",
"middle": [],
"last": "Magnini",
"suffix": ""
},
{
"first": "Anselmo",
"middle": [],
"last": "Pe\u00f1as",
"suffix": ""
},
{
"first": "\u00c1lvaro",
"middle": [],
"last": "Rodrigo",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sutcliffe",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pamela Forner, Danilo Giampiccolo, Bernardo Magnini, Anselmo Pe\u00f1as, \u00c1lvaro Rodrigo, and Richard Sut- cliffe. 2010. Evaluating multilingual question an- swering systems at CLEF. In Proc. of LREC.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Overview of the clef 2008 multilingual question answering track",
"authors": [
{
"first": "Pamela",
"middle": [],
"last": "Forner",
"suffix": ""
},
{
"first": "Anselmo",
"middle": [],
"last": "Pe\u00f1as",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "I\u00f1aki",
"middle": [],
"last": "Alegria",
"suffix": ""
},
{
"first": "Corina",
"middle": [],
"last": "For\u0203scu",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Moreau",
"suffix": ""
},
{
"first": "Petya",
"middle": [],
"last": "Osenova",
"suffix": ""
},
{
"first": "Prokopis",
"middle": [],
"last": "Prokopidis",
"suffix": ""
},
{
"first": "Paulo",
"middle": [],
"last": "Rocha",
"suffix": ""
},
{
"first": "Bogdan",
"middle": [],
"last": "Sacaleanu",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Sutcliffe",
"suffix": ""
},
{
"first": "Erik Tjong Kim",
"middle": [],
"last": "Sang",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of CLEF",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pamela Forner, Anselmo Pe\u00f1as, Eneko Agirre, I\u00f1aki Alegria, Corina For\u0203scu, Nicolas Moreau, Petya Osenova, Prokopis Prokopidis, Paulo Rocha, Bogdan Sacaleanu, Richard Sutcliffe, and Erik Tjong Kim Sang. 2008. Overview of the clef 2008 multilingual question answering track. In Proc. of CLEF.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "XTREME: A massively multilingual multitask benchmark for evaluating cross-lingual generalisation",
"authors": [
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ICML",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra- ham Neubig, Orhan Firat, and Melvin Johnson. 2020. XTREME: A massively multilingual multi- task benchmark for evaluating cross-lingual generali- sation. In Proc. of ICML.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "ZusammenQA: Data augmentation with specialized models for cross-lingual open-retrieval question answering system",
"authors": [
{
"first": "Chia-Chien",
"middle": [],
"last": "Hung",
"suffix": ""
},
{
"first": "Tommaso",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Litschko",
"suffix": ""
},
{
"first": "Tornike",
"middle": [],
"last": "Tsereteli",
"suffix": ""
},
{
"first": "Sotaro",
"middle": [],
"last": "Takeshita",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Bombieri",
"suffix": ""
},
{
"first": "Goran",
"middle": [],
"last": "Glava\u0161",
"suffix": ""
},
{
"first": "Simone",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of MIA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chia-Chien Hung, Tommaso Green, Robert Litschko, Tornike Tsereteli, Sotaro Takeshita, Marco Bombieri, Goran Glava\u0161, and Simone Paolo Ponzetto1. 2022. ZusammenQA: Data augmentation with specialized models for cross-lingual open-retrieval question an- swering system. In Proc. of MIA.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Distilling knowledge from reader to retriever for question answering",
"authors": [
{
"first": "Gautier",
"middle": [],
"last": "Izacard",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gautier Izacard and Edouard Grave. 2021a. Distilling knowledge from reader to retriever for question an- swering. In Proc. of ICLR.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Leveraging passage retrieval with generative models for open domain question answering",
"authors": [
{
"first": "Gautier",
"middle": [],
"last": "Izacard",
"suffix": ""
},
{
"first": "Edouard",
"middle": [],
"last": "Grave",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gautier Izacard and Edouard Grave. 2021b. Leveraging passage retrieval with generative models for open domain question answering. In Proc. of EACL.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "The state and fate of linguistic diversity and inclusion in the NLP world",
"authors": [
{
"first": "Pratik",
"middle": [],
"last": "Joshi",
"suffix": ""
},
{
"first": "Sebastin",
"middle": [],
"last": "Santy",
"suffix": ""
},
{
"first": "Amar",
"middle": [],
"last": "Budhiraja",
"suffix": ""
},
{
"first": "Kalika",
"middle": [],
"last": "Bali",
"suffix": ""
},
{
"first": "Monojit",
"middle": [],
"last": "Choudhury",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2020.emnlp-main.466"
]
},
"num": null,
"urls": [],
"raw_text": "Pratik Joshi, Sebastin Santy, Amar Budhiraja, Kalika Bali, and Monojit Choudhury. 2020. The state and fate of linguistic diversity and inclusion in the NLP world. In Proc. of ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Dense passage retrieval for opendomain question answering",
"authors": [
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ledell",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open- domain question answering. In Proc. of EMNLP.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Natural questions: a benchmark for question answering research. TACL",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Jennimaria",
"middle": [],
"last": "Palomaki",
"suffix": ""
},
{
"first": "Olivia",
"middle": [],
"last": "Redfield",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Alberti",
"suffix": ""
},
{
"first": "Danielle",
"middle": [],
"last": "Epstein",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kwiatkowski, Jennimaria Palomaki, Olivia Red- field, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Ken- ton Lee, et al. 2019. Natural questions: a benchmark for question answering research. TACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "MLQA: Evaluating cross-lingual extractive question answering",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Ruty",
"middle": [],
"last": "Rinott",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Barlas Oguz, Ruty Rinott, Sebastian Riedel, and Holger Schwenk. 2020a. MLQA: Evalu- ating cross-lingual extractive question answering. In Proc. of ACL.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Retrieval-augmented generation for knowledgeintensive nlp tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"https://asistdl.onlinelibrary.wiley.com/doi/full/10.1002/asi.24553"
]
},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, Sebastian Riedel, and Douwe Kiela. 2020b. Retrieval-augmented generation for knowledge- intensive nlp tasks. In Proc. of NeurIPS.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Retrieval-augmented generation for knowledge-intensive NLP tasks",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Ethan",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Aleksandra",
"middle": [],
"last": "Piktus",
"suffix": ""
},
{
"first": "Fabio",
"middle": [],
"last": "Petroni",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Heinrich",
"middle": [],
"last": "K\u00fcttler",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of NeurIPS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Hein- rich K\u00fcttler, Mike Lewis, Wen-tau Yih, Tim Rock- t\u00e4schel, et al. 2020c. Retrieval-augmented genera- tion for knowledge-intensive NLP tasks. In Proc. of NeurIPS.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Question and answer test-train overlap in opendomain question answering datasets",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Pontus",
"middle": [],
"last": "Stenetorp",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of EACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open- domain question answering datasets. In Proc. of EACL.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "XQA: A cross-lingual open-domain question answering dataset",
"authors": [
{
"first": "Jiahua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yankai",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Zhiyuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2019,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiahua Liu, Yankai Lin, Zhiyuan Liu, and Maosong Sun. 2019. XQA: A cross-lingual open-domain question answering dataset. In Proc. of ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "MKQA: A linguistically diverse benchmark for multilingual open domain question answering",
"authors": [
{
"first": "Shayne",
"middle": [],
"last": "Longpre",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Joachim",
"middle": [],
"last": "Daiber",
"suffix": ""
}
],
"year": 2021,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shayne Longpre, Yi Lu, and Joachim Daiber. 2021. MKQA: A linguistically diverse benchmark for mul- tilingual open domain question answering. TACL.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2020a. NeurIPS 2020 efficientQA competition: Systems, analyses and lessons learned",
"authors": [
{
"first": "Shun",
"middle": [],
"last": "Miyawaki",
"suffix": ""
},
{
"first": "Ryo",
"middle": [],
"last": "Sato",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Takahashi",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Fajcik",
"suffix": ""
},
{
"first": "Karel",
"middle": [],
"last": "Docekal",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Ondrej",
"suffix": ""
},
{
"first": "Hao",
"middle": [],
"last": "Smrz",
"suffix": ""
},
{
"first": "Yelong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Pengcheng",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Weizhu",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Barlas",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Xilun",
"middle": [],
"last": "Oguz",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Stan",
"middle": [],
"last": "Karpukhin",
"suffix": ""
},
{
"first": "Dmytro",
"middle": [],
"last": "Peshterliev",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Okhonko",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Schlichtkrull",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of NeurIPS 2020 Competition and Demonstration Track",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/2021.emnlp-main.586"
]
},
"num": null,
"urls": [],
"raw_text": "Miyawaki, Shun Sato, Ryo Takahashi, Jun Suzuki, Martin Fajcik, Martin Docekal, Karel Ondrej, Pavel Smrz, Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, Jianfeng Gao, Bar- las Oguz, Xilun Chen, Vladimir Karpukhin, Stan Peshterliev, Dmytro Okhonko, Michael Schlichtkrull, Sonal Gupta, Yashar Mehdad, and Wen-tau Yih. 2020a. NeurIPS 2020 efficientQA competition: Systems, analyses and lessons learned. In Proc. of NeurIPS 2020 Competition and Demonstration Track.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "AmbigQA: Answering ambiguous open-domain questions",
"authors": [
{
"first": "Sewon",
"middle": [],
"last": "Min",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Michael",
"suffix": ""
},
{
"first": "Hannaneh",
"middle": [],
"last": "Hajishirzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2020,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sewon Min, Julian Michael, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2020b. AmbigQA: Answering ambiguous open-domain questions. In Proc. of EMNLP.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Investigating information inconsistency in multilingual open-domain question answering",
"authors": [
{
"first": "Shramay",
"middle": [],
"last": "Palta",
"suffix": ""
},
{
"first": "Haozhe",
"middle": [],
"last": "An",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Shuaiyi",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Maharshi",
"middle": [],
"last": "Gor",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shramay Palta, Haozhe An, Yifan Yang, Shuaiyi Huang, and Maharshi Gor. 2022. Investigating information inconsistency in multilingual open-domain question answering.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "RocketQA: An optimized training approach to dense passage retrieval for opendomain question answering",
"authors": [
{
"first": "Yingqi",
"middle": [],
"last": "Qu",
"suffix": ""
},
{
"first": "Yuchen",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ruiyang",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Wayne",
"middle": [
"Xin"
],
"last": "Zhao",
"suffix": ""
},
{
"first": "Daxiang",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingqi Qu, Yuchen Ding, Jing Liu, Kai Liu, Ruiyang Ren, Wayne Xin Zhao, Daxiang Dong, Hua Wu, and Haifeng Wang. 2021. RocketQA: An optimized train- ing approach to dense passage retrieval for open- domain question answering. In Proc. of NAACL.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "2022. mLUKE: The power of entity representations in multilingual pretrained language models",
"authors": [
{
"first": "Ryokan",
"middle": [],
"last": "Ri",
"suffix": ""
},
{
"first": "Ikuya",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ryokan Ri, Ikuya Yamada, and Yoshimasa Tsuruoka. 2022. mLUKE: The power of entity representations in multilingual pretrained language models. In Proc. of ACL.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Information asymmetry in wikipedia across different languages: A statistical analysis",
"authors": [
{
"first": "Dwaipayan",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Bhatia",
"suffix": ""
},
{
"first": "Prateek",
"middle": [],
"last": "Jain",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dwaipayan Roy, Sumit Bhatia, and Prateek Jain. 2022. Information asymmetry in wikipedia across different languages: A statistical analysis. JASIST.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Crosslanguage information retrieval (CLIR) track overview",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Sch\u00e4uble",
"suffix": ""
},
{
"first": "P\u00e1raic",
"middle": [],
"last": "Sheridan",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of TREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Sch\u00e4uble and P\u00e1raic Sheridan. 1997. Cross- language information retrieval (CLIR) track overview. In Proc. of TREC.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "FEVER: a large-scale dataset for fact extraction and VERification",
"authors": [
{
"first": "James",
"middle": [],
"last": "Thorne",
"suffix": ""
},
{
"first": "Andreas",
"middle": [],
"last": "Vlachos",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "James Thorne, Andreas Vlachos, Christos Christodoulopoulos, and Arpit Mittal. 2018. FEVER: a large-scale dataset for fact extraction and VERification. In Proc. of NAACL.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "MIA 2022 shared task submission: Leveraging entity representations, dense-sparse hybrids, and fusion-indecoder for cross-lingual question",
"authors": [
{
"first": "Zhucheng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sarguna Janani Padmanabhan",
"suffix": ""
}
],
"year": 2022,
"venue": "Proc. of MIA",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhucheng Tu and Sarguna Janani Padmanabhan. 2022. MIA 2022 shared task submission: Leveraging entity representations, dense-sparse hybrids, and fusion-in- decoder for cross-lingual question. In Proc. of MIA.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "SNCSE: Contrastive learning for unsupervised sentence embedding with soft negative samples",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yangguang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Dou",
"suffix": ""
},
{
"first": "Lingpeng",
"middle": [],
"last": "Kong",
"suffix": ""
},
{
"first": "Jing",
"middle": [],
"last": "Shao",
"suffix": ""
}
],
"year": 2022,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Wang, Yangguang Li, Zhen Huang, Yong Dou, Lingpeng Kong, and Jing Shao. 2022. SNCSE: Con- trastive learning for unsupervised sentence embed- ding with soft negative samples.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Sogou neural machine translation systems for WMT17",
"authors": [
{
"first": "Yuguang",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Shanbo",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Liyang",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Muze",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Lin",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yanfeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongtao",
"middle": [],
"last": "Yang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuguang Wang, Shanbo Cheng, Liyang Jiang, Jiajun Yang, Wei Chen, Muze Li, Lin Shi, Yanfeng Wang, and Hongtao Yang. 2017. Sogou neural machine translation systems for WMT17. In Proc. of WMT.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer",
"authors": [
{
"first": "Linting",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Noah",
"middle": [],
"last": "Constant",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Mihir",
"middle": [],
"last": "Kale",
"suffix": ""
},
{
"first": "Rami",
"middle": [],
"last": "Al-Rfou",
"suffix": ""
},
{
"first": "Aditya",
"middle": [],
"last": "Siddhant",
"suffix": ""
}
],
"year": null,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilin- gual pre-trained text-to-text transformer. In Proc. of NAACL.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "EvalAI: Towards better evaluation systems for ai agents",
"authors": [
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "Rishabh",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "Harsh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Prithvijit",
"middle": [],
"last": "Chattopadhyay",
"suffix": ""
},
{
"first": "Taranjeet",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Akash",
"middle": [],
"last": "Jain",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Shiv Baran",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deshraj Yadav, Rishabh Jain, Harsh Agrawal, Prithvi- jit Chattopadhyay, Taranjeet Singh, Akash Jain, Shiv Baran Singh, Stefan Lee, and Dhruv Batra. 2019. EvalAI: Towards better evaluation systems for ai agents.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "SituatedQA: Incorporating extra-linguistic contexts into QA",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Zhang and Eunsol Choi. 2021. SituatedQA: Incorporating extra-linguistic contexts into QA. In Proc. of EMNLP.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Mr. TyDi: A multi-lingual benchmark for dense retrieval",
"authors": [
{
"first": "Xinyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xueguang",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2021,
"venue": "Proc. of MRL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinyu Zhang, Xueguang Ma, Peng Shi, and Jimmy Lin. 2021. Mr. TyDi: A multi-lingual benchmark for dense retrieval. In Proc. of MRL.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "7"
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "XOR-TyDi QA performance vs. answer overlap between train and test sets."
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Performance vs. answer overlap between train and test sets."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "(a) MKQA Answer agreement. (b) Per-category agreement (En).(c) Per-category agreement (Ja)."
},
"FIGREF4": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Answer agreements of the four submitted systems."
},
"TABREF0": {
"content": "<table><tr><td/><td colspan=\"2\">Language Family</td><td/><td># of examples</td><td/><td># Wiki. passages</td></tr><tr><td>Language</td><td>Family</td><td>Branch</td><td colspan=\"2\">Train Development</td><td>Test</td><td/></tr><tr><td>Arabic (ar) Bengali (bn) English (en) Spanish (es) Finnish (fi) Japanese (ja) Khmer (km) Korean (ko) Malay (ms) Russian (ru) Swedish (sv) Chinese (zh) Telugu (te)</td><td colspan=\"2\">Afro-Asiatic Indo-European Indo-Iranian Semitic Indo-European Germanic Indo-European Italic Uralic Finnic Japonic Japonic Austroasiatic Khmer Koreanic Han Austronesian Malayo-Poly. Indo-European Balto-Slavic Indo-Europea Germanic Sino-Tibetan Sinitic Dravidian South-Central</td><td>18,402 5,007 76,635 0 9,762 7,815 0 4,319 0 9,290 0 0 6,759</td><td colspan=\"2\">3,145 5,590 2,248 5,203 1,758 5,000 1,758 5,000 2,732 1,368 2,451 6,056 1,758 5,000 2,231 6,048 1,758 5,000 2,776 6,910 1,758 5,000 1,758 5,000 2,322 6,873</td><td>1,304,828 179,936 18,003,200 5,738,484 886,595 5,116,905 63,037 638,864 397,396 4,545,635 4,525,695 3,394,943 274,230</td></tr><tr><td>Surprise Languages Tagalog (tl) Tamil (ta)</td><td>Austronesian Dravidian</td><td>Malayo-Poly. Southern</td><td>0 0</td><td>0 0</td><td>350 350</td><td>--</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF1": {
"content": "<table><tr><td>3.1 Source Datasets</td></tr><tr><td>XOR-TyDi QA (Asai et al., 2021a) is a cross-lingual open-retrieval QA dataset covering 7 lan-</td></tr><tr><td>guages built upon TyDi QA (Clark et al., 2020).</td></tr><tr><td>Asai et al. (2021a) collect answers for questions</td></tr><tr><td>in TyDi QA that are unanswerable using the same-</td></tr><tr><td>language Wikipedia. As the questions are inher-</td></tr><tr><td>ited from TyDi QA, they are written by native</td></tr><tr><td>speakers to better reflect their own interests and</td></tr><tr><td>linguistic phenomena, and they are not parallel</td></tr><tr><td>across languages. We use data for the XOR-full</td></tr><tr><td>setting, where some questions can be answered</td></tr><tr><td>based on the target language's Wikipedia (monolin-</td></tr><tr><td>gual) while others require evidence only presented</td></tr><tr><td>in English Wikipedia (cross-lingual). We use all of</td></tr><tr><td>the 7 languages covered by XOR-TyDi QA: Ara-</td></tr><tr><td>bic (ar), Bengali (bn), Finnish (fi), Japanese (ja),</td></tr><tr><td>Korean (ko), Russian (ru), Telugu (te).</td></tr><tr><td>MKQA (Longpre et al., 2021) comprises the largest set of languages and dialects (26) for open-</td></tr><tr><td>retrieval QA, spanning 14 language families. There</td></tr><tr><td>are 10k question and answer pairs per language.</td></tr><tr><td>The questions are human-translated from English</td></tr><tr><td>Natural Questions</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "List of the languages, their families and amount of data available in the MIA shared task data. The last two languages are surprise languages hidden from the participants."
},
"TABREF3": {
"content": "<table><tr><td>sys</td><td/><td/><td/><td/><td/><td colspan=\"2\">Language F1</td><td/><td/><td/><td/><td/><td/></tr><tr><td>ar</td><td>en</td><td>es</td><td>fi</td><td>ko</td><td>ma</td><td>ja</td><td>km</td><td>ru</td><td>sv</td><td>tr</td><td>zh</td><td>tm</td><td>ta</td></tr><tr><td>(a) 12</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "Final results on the XOR-TyDi QA subsets of the MIA 2022 shared task. The grayed entry indicates an unconstrained setting. .67 39.63 30.85 25.22 12.81 29.09 20.49 2.36 18.82 29.62 26.16 22.60 20.75 20.95 (b) 13.94 42.58 32.11 26.75 14.59 31.13 22.72 8.71 22.36 31.48 26.59 18.00 2.74 26.42 (c) 8.73 35.32 25.54 20.42 6.78 24.10 14.27 6.06 12.01 25.97 20.27 13.95 0.00 11.14 (d) 9.52 36.34 27.23 22.70 7.68 25.11 15.89 6.00 14.60 26.69 21.66 13.78 0.00 12.78 (e) 13.62 33.24 28.98 25.26 13.07 29.04 23.11 3.96 20.11 29.75 28.15 11.30 0.00 0.00"
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
},
"TABREF7": {
"content": "<table><tr><td>: Final results on MIA 2022 Shared Task XOR-TyDi QA cross-lingual (\"cl\") / monolingual subsets (\"m\"). Systems (a), (b), (c) and (d) are Texttron, mLUKE-FID, CMUmQA, and ZusammenQA, respectively.</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": ""
}
}
}
}