arxiv:2407.18887

Embedding And Clustering Your Data Can Improve Contrastive Pretraining

Published on Jul 26, 2024

Abstract

Recent studies of large-scale contrastive pretraining in the text embedding domain show that using single-source minibatches, rather than mixed-source minibatches, can substantially improve overall model accuracy. In this work, we explore extending training data stratification beyond source granularity by leveraging a pretrained text embedding model and the classic k-means clustering algorithm to further split training data apart by the semantic clusters within each source. Experimentally, we observe a notable increase in NDCG@10 when pretraining a BERT-based text embedding model on query-passage pairs from the MSMARCO passage retrieval dataset. Additionally, we conceptually connect our clustering approach to both the Topic Aware Sampling (TAS) aspect of the TAS-B methodology and the nearest-neighbor-based hard-negative mining aspect of the ANCE methodology and discuss how this unified view motivates future lines of research on the organization of contrastive pretraining data.
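
The clustering-based data organization described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: it assumes the sentence-transformers and scikit-learn libraries, and the encoder name, per-source cluster count k, and batch size are placeholder choices. The idea is to embed each source's query-passage pairs with a pretrained text embedding model, run k-means within each source, and then draw every contrastive minibatch from a single (source, cluster) bucket.

```python
# Hypothetical sketch of source-then-cluster stratification for contrastive pretraining.
# Assumes sentence-transformers and scikit-learn; model name, k, and batch_size are illustrative.
from collections import defaultdict
import random

from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans


def cluster_pairs_by_source(pairs, k=8, seed=0):
    """pairs: iterable of (source, query, passage) tuples.
    Returns {(source, cluster_id): [pair, ...]}."""
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any pretrained text embedding model
    by_source = defaultdict(list)
    for pair in pairs:
        by_source[pair[0]].append(pair)

    buckets = defaultdict(list)
    for source, src_pairs in by_source.items():
        # Embed the queries; one could also embed the passages or query+passage concatenations.
        embeddings = encoder.encode([q for _, q, _ in src_pairs])
        n_clusters = min(k, len(src_pairs))
        labels = KMeans(n_clusters=n_clusters, random_state=seed).fit_predict(embeddings)
        for pair, label in zip(src_pairs, labels):
            buckets[(source, int(label))].append(pair)
    return buckets


def single_cluster_minibatches(buckets, batch_size=32, seed=0):
    """Yield minibatches drawn entirely from one (source, cluster) bucket at a time."""
    rng = random.Random(seed)
    keys = list(buckets)
    rng.shuffle(keys)
    for key in keys:
        items = list(buckets[key])
        rng.shuffle(items)
        for i in range(0, len(items), batch_size):
            yield items[i : i + batch_size]
```

Which text is embedded (queries, passages, or both) and how many clusters to use per source are open choices; the sketch above fixes one arbitrary setting purely for concreteness.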
