{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T03:13:50.562239Z" }, "title": "Cross-Lingual Training of Dense Retrievers for Document Retrieval", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "peng.shi@uwaterloo.ca" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Waterloo", "location": {} }, "email": "jimmylin@uwaterloo.ca" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Dense retrieval has shown great success for passage ranking in English. However, its effectiveness for non-English languages remains unexplored due to limitations in training resources. In this work, we explore different transfer techniques for document ranking from English annotations to non-English languages. Our experiments reveal that zero-shot model-based transfer using mBERT improves search quality. We find that weakly-supervised target language transfer is competitive with generation-based target language transfer, which requires translation models.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Dense retrieval has shown great success for passage ranking in English. However, its effectiveness for non-English languages remains unexplored due to limitations in training resources. In this work, we explore different transfer techniques for document ranking from English annotations to non-English languages. Our experiments reveal that zero-shot model-based transfer using mBERT improves search quality. 
We find that weakly-supervised target language transfer is competitive with generation-based target language transfer, which requires translation models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Dense retrieval uses dense vector representations for semantic encoding and matching. However, most existing work focuses on high-resource languages such as English, where large-scale annotations are readily accessible. In this work, we focus on improving retrieval effectiveness for low(er)-resource languages. We explore techniques for leveraging relevance judgments in English to train dense retrievers for document retrieval in non-English languages. Our experimental results show that combining dense retrieval and term-matching retrieval yields effectiveness improvements. Also, weakly-supervised target language transfer is competitive with generation-based target language transfer. This extended abstract is an abridged version of Shi et al. (2021).", "cite_spans": [ { "start": 761, "end": 778, "text": "Shi et al. (2021)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We leverage the DPR model of Karpukhin et al. (2020), but with mBERT as the backbone model. During inference, we apply both bag-of-words exact term matching and dense retrieval. The relevance score of each document combines term-matching scores with dense retrieval similarity via S_doc = \u03b1 \u00b7 S_term + (1 \u2212 \u03b1) \u00b7 S_dense, where \u03b1 is tuned via cross-validation.", "cite_spans": [ { "start": 29, "end": 52, "text": "Karpukhin et al. (2020)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Relevance Transfer", "sec_num": "2" }, { "text": "Model-Based Transfer. 
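(For concreteness, the score interpolation S_doc = \u03b1 \u00b7 S_term + (1 \u2212 \u03b1) \u00b7 S_dense described above can be sketched as follows. This is a minimal illustration under our own naming, not the paper's implementation; the scores and the fixed \u03b1 are made up, whereas the paper tunes \u03b1 by cross-validation.)

```python
# Illustrative sketch of linear score interpolation between a
# term-matching run (e.g., BM25) and a dense-retrieval run.
# All names and values here are hypothetical.

def interpolate_scores(term_scores, dense_scores, alpha):
    """Combine per-document term and dense scores with weight alpha."""
    doc_ids = set(term_scores) | set(dense_scores)
    return {
        doc_id: alpha * term_scores.get(doc_id, 0.0)
                + (1 - alpha) * dense_scores.get(doc_id, 0.0)
        for doc_id in doc_ids
    }

# Toy example with invented scores; alpha=0.25 is arbitrary.
fused = interpolate_scores({"d1": 0.8, "d2": 0.4},
                           {"d1": 0.5, "d2": 0.9},
                           alpha=0.25)
ranking = sorted(fused, key=fused.get, reverse=True)  # ['d2', 'd1']
```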
By exploiting the zero-shot cross-lingual transfer ability of pretrained transformers such as mBERT (Devlin et al., 2019), we train the dense retriever in the source language and apply inference directly to the target languages.", "cite_spans": [ { "start": 121, "end": 142, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Relevance Transfer", "sec_num": "2" }, { "text": "Target Language Transfer. To bridge the language gap between training and inference, we explore two techniques for creating a target language transfer set. (1) Generation-based query synthesis, where the goal is to leverage powerful generation models to predict reasonable queries given documents in the target language. We choose mBART (Liu et al., 2020) as our query generation model. The input of the model is the passage, and its learning target is the corresponding query. We use the translate-train technique to obtain the generation models. More specifically, we leverage Google Translate to translate English query-document pairs into the target languages. Then, we use passages in the target language collections as input and generate corresponding queries in the same language. (2) Weakly-supervised query synthesis, where we automatically build the target language transfer set without manual annotation effort by treating the titles of Wikipedia articles as queries and the corresponding documents as positive candidates. We also retrieve the top 1000 documents with BM25 for each query; documents other than the positive candidate are labeled as negatives.", "cite_spans": [ { "start": 337, "end": 355, "text": "(Liu et al., 2020)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Relevance Transfer", "sec_num": "2" }, { "text": "Two-Stage Training. We apply two-stage training to learn the dense retrieval model. 
The encoders are first trained on annotated data in the source language (English), which are available in larger quantities; then the models are fine-tuned on the synthesized query-document pairs in the target language.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Cross-Lingual Relevance Transfer", "sec_num": "2" }, { "text": "We conduct experiments on six test collections: NTCIR-8 in Chinese, TREC 2002 in Arabic, CLEF 2006 in French, FIRE 2012 in Hindi, FIRE 2012 in Bengali, and TREC 3 in Spanish. Table 1 shows the effectiveness of BM25 and BM25 with RM3 query expansion. For each language, we select the higher P@20 of the two models as the term-based matching baseline. That is, for the French, Bengali, and Spanish collections, we use BM25+RM3 as the term-based matching baseline, and for the others, we use BM25. Significant gains against the baselines are denoted with \u2020.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "For the evaluation metrics, we adopt AP, P@20, and nDCG@20. For model-based transfer, we explore Natural Questions and MS MARCO as training datasets. For training the query generator in the target languages, we obtain training data by sampling 2000 query-passage pairs from MS MARCO and translating them into the target languages. Fisher's two-sided, paired randomization test (Smucker et al., 2007) at p < 0.05 was applied to test for statistical significance.", "cite_spans": [ { "start": 405, "end": 427, "text": "(Smucker et al., 2007)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "3" }, { "text": "Finding #1: Zero-shot model-based transfer improves term-based matching. 
The results of zero-shot model-based transfer are shown in Model (2) and Model (3). Compared with the corresponding baselines, we observe that model-based transfer, either NQ zero-shot or MS zero-shot, can improve retrieval effectiveness on P@20 for all collections, except NQ zero-shot on the TREC3-es dataset. We do not observe a clear winner between NQ and MS, though. These results indicate that mBERT-based DPR effectively transfers relevance matching across languages.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4" }, { "text": "Finding #2: Target language transfer benefits certain collections, and Wiki query synthesis is better than query generation. Target language transfer results are shown in Model (4) and Model (5). MS \u2192 QGen and MS \u2192 Wiki denote the two-stage training strategy with different transfer sets, where QGen denotes generation-based query synthesis and Wiki denotes weakly-supervised query synthesis from Wikipedia. By comparing Model (4) with Model (3), we observe that second-stage training with generation-based query-document pairs can improve P@20 effectiveness over zero-shot model-based transfer on the Chinese, French, Hindi, Bengali, and Spanish collections. However, we see little improvement in terms of AP for all collections. By comparing Model (5) with Model (3), we find that second-stage training with weakly-supervised training data can improve P@20 over the zero-shot baselines on French, Hindi, Bengali, and Spanish. 
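(As an aside, the weakly-supervised Wiki construction described in Section 2 can be sketched as follows. This is our own minimal illustration, not the authors' code; `bm25_search` is a hypothetical stand-in for a real BM25 retriever such as one backed by an indexed collection.)

```python
# Hypothetical sketch of weakly-supervised transfer-set construction:
# a Wikipedia article's title serves as the query, the article itself is
# the positive candidate, and the BM25 top-k results minus the positive
# are labeled as negatives. `bm25_search(query, k)` is a stand-in that
# returns a ranked list of document ids.

def build_wiki_transfer_set(articles, bm25_search, k=1000):
    """articles: dict mapping doc_id -> article title."""
    examples = []
    for doc_id, title in articles.items():
        hits = bm25_search(title, k)
        negatives = [h for h in hits if h != doc_id]
        examples.append({"query": title,
                         "positive": doc_id,
                         "negatives": negatives})
    return examples

# Tiny demo with a stubbed retriever.
demo = build_wiki_transfer_set({"b": "Sample Title"},
                               lambda q, k: ["a", "b", "c"][:k], k=3)
```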
Furthermore, by comparing these two transfer sets, we observe that, except for Chinese, Wiki obtains better retrieval effectiveness than QGen, which requires translation models for the target languages (and these are expensive to build and not available for all languages).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Results and Discussions", "sec_num": "4" }, { "text": "We investigate the effectiveness of three transfer techniques for document ranking from English training data to non-English target languages. Our experiments in six languages demonstrate that zero-shot transfer using mBERT-based dense retrieval models improves over term-based matching methods, and fine-tuning on augmented data in the target languages can further benefit certain collections.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. 
Association for Computational Linguistics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Dense passage retrieval for open-domain question answering", "authors": [ { "first": "Vladimir", "middle": [], "last": "Karpukhin", "suffix": "" }, { "first": "Barlas", "middle": [], "last": "Oguz", "suffix": "" }, { "first": "Sewon", "middle": [], "last": "Min", "suffix": "" }, { "first": "Patrick", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Ledell", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Danqi", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Wen-Tau", "middle": [], "last": "Yih", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "6769--6781", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6769-6781, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Multilingual denoising pre-training for neural machine translation", "authors": [ { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jiatao", "middle": [], "last": "Gu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Xian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Edunov", "suffix": "" }, { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Transactions of the Association for Computational Linguistics", "volume": "8", "issue": "", "pages": "726--742", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yinhan Liu, Jiatao Gu, Naman Goyal, Xian Li, Sergey Edunov, Marjan Ghazvininejad, Mike Lewis, and Luke Zettlemoyer. 2020. Multilingual denoising pre-training for neural machine translation. Transactions of the Association for Computational Linguistics, 8:726-742.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Cross-lingual training with dense retrieval for document retrieval", "authors": [ { "first": "Peng", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "He", "middle": [], "last": "Bai", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2021, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2109.01628" ] }, "num": null, "urls": [], "raw_text": "Peng Shi, Rui Zhang, He Bai, and Jimmy Lin. 2021. Cross-lingual training with dense retrieval for document retrieval. 
arXiv preprint arXiv:2109.01628.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A comparison of statistical significance tests for information retrieval evaluation", "authors": [ { "first": "Mark", "middle": [ "D." ], "last": "Smucker", "suffix": "" }, { "first": "James", "middle": [], "last": "Allan", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Carterette", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "623--632", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mark D. Smucker, James Allan, and Ben Carterette. 2007. A comparison of statistical significance tests for information retrieval evaluation. In Proceedings of the Sixteenth ACM Conference on Information and Knowledge Management, pages 623-632.", "links": null } }, "ref_entries": { "TABREF0": { "num": null, "content": "
Model | NTCIR8-zh (AP, P@20, nDCG) | TREC2002-ar (AP, P@20, nDCG) | CLEF2006-fr (AP, P@20, nDCG)
(0) BM25 | 0.4014, 0.3849, 0.4757 | 0.2932, 0.3610, 0.4056 | 0.3111, 0.3184, 0.4458
(1) BM25+RM3 | 0.3384, 0.3616, 0.4490 | 0.2783, 0.3490, 0.3969 | 0.3421, 0.3408, 0.4658
(2) NQ zero-shot | 0.4221\u2020, 0.4164\u2020, 0.5235\u2020 | 0.2943, 0.3560, 0.4012 | 0.3470, 0.3469, 0.4726
(3) MS zero-shot | 0.4167\u2020, 0.4164\u2020, 0.5095\u2020 | 0.3024, 0.3810\u2020, 0.4285 | 0.3332, 0.3418, 0.4573
(4) MS \u2192 QGen | 0.4258\u2020, 0.4336\u2020, 0.5308\u2020 | 0.2988, 0.3800, 0.4276 | 0.3331, 0.3429, 0.4564
(5) MS \u2192 Wiki | 0.4135, 0.4123\u2020, 0.5055\u2020 | 0.3060\u2020, 0.3750, 0.4293 | 0.3456, 0.3480, 0.4743
Model | FIRE2012-hi (AP, P@20, nDCG) | FIRE2012-bn (AP, P@20, nDCG) | TREC3-es (AP, P@20, nDCG)
(0) BM25 | 0.3867, 0.4470, 0.5310 | 0.2881, 0.3740, 0.4261 | 0.4197, 0.6660, 0.6851
(1) BM25+RM3 | 0.3660, 0.4430, 0.5277 | 0.2833, 0.3830, 0.4351 | 0.4912, 0.7040, 0.7079
(2) NQ zero-shot | 0.3939, 0.4560, 0.5408 | 0.2898, 0.3980, 0.4495\u2020 | 0.4910, 0.6980, 0.7007
(3) MS zero-shot | 0.3944, 0.4580, 0.5461 | 0.2896\u2020, 0.3900, 0.4449 | 0.4950, 0.7080, 0.7171
(4) MS \u2192 QGen | 0.3941, 0.4660, 0.5527 | 0.2887, 0.3980, 0.4486 | 0.4958\u2020, 0.7180, 0.7239
(5) MS \u2192 Wiki | 0.3950, 0.4630, 0.5497 | 0.2898\u2020, 0.4050, 0.4549 | 0.4972\u2020, 0.7180, 0.7329
", "type_str": "table", "text": "", "html": null } } } }