arxiv:2406.14764

RE-AdaptIR: Improving Information Retrieval through Reverse Engineered Adaptation

Published on Jun 20
Submitted by will-fleshman on Jun 24

Abstract

Large language models (LLMs) fine-tuned for text retrieval have demonstrated state-of-the-art results across several information retrieval (IR) benchmarks. However, supervised training for improving these models requires numerous labeled examples, which are generally unavailable or expensive to acquire. In this work, we explore the effectiveness of extending reverse engineered adaptation to the context of information retrieval (RE-AdaptIR). We use RE-AdaptIR to improve LLM-based IR models using only unlabeled data. We demonstrate improved performance both in training domains and zero-shot in domains where the models have seen no queries. We analyze performance changes in various fine-tuning scenarios and offer findings of immediate use to practitioners.

Community

Paper author and submitter:

How can you improve text retrieval models with all that unlabeled data lying around? RE-AdaptIR extends reverse engineered adaptation to IR models. RepLLaMA and e5-Mistral are improved by using RE-AdaptIR to isolate the IR training from the base model's pretraining. Unlabeled data is then used to continue pretraining in the new domain. Finally, the model is readapted back to IR, with improved performance thanks to the additional pretraining (see the sketch below)!
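For intuition, here is a minimal sketch of the weight arithmetic this describes, assuming the IR "adapter" is isolated as the element-wise difference between the IR fine-tuned and base checkpoints, and that the models are stored as plain PyTorch state dicts with matching parameter names. The file names and the `alpha` scale are illustrative assumptions, not the paper's released code.

```python
import torch

# Illustrative file names (assumptions, not the paper's artifacts): state dicts
# for the pretrained base LLM, its supervised IR fine-tune (e.g. RepLLaMA or
# e5-Mistral), and the base model after continued pretraining on unlabeled
# in-domain text.
base = torch.load("base_model.pt")                 # pretrained base weights
ir_tuned = torch.load("ir_finetuned_model.pt")     # base + supervised IR fine-tuning
domain = torch.load("domain_pretrained_model.pt")  # base + continued pretraining on unlabeled text

alpha = 1.0  # assumed scaling knob on the reverse-engineered adapter

readapted = {}
for name, w_base in base.items():
    # "Reverse engineer" the IR adapter as the weight change introduced by
    # supervised IR fine-tuning relative to the original base model.
    delta_ir = ir_tuned[name] - w_base
    # Re-apply that adapter on top of the domain-adapted base model.
    readapted[name] = domain[name] + alpha * delta_ir

torch.save(readapted, "re_adaptir_model.pt")
```

The point of the arrangement is that no labeled queries are needed at readaptation time: only the existing IR checkpoint and unlabeled in-domain text.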
