The project aims to train sentence embedding models on very large sentence-level datasets using a self-supervised contrastive learning objective. We used the pretrained MiniLM-L12 model and fine-tuned it on a dataset of 1B sentence pairs. We use a contrastive learning objective: given a sentence from the pair, the model should predict which sentence, out of a set of randomly sampled other sentences, was actually paired with it in our dataset.
We developed this model during the Community Week Using JAX/Flax for NLP & CV, organized by Hugging Face, as part of the project Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project: 7 TPU v3-8s, as well as guidance on efficient deep learning frameworks from Google's Flax, JAX, and Cloud team members.
Our model is intended to be used as a sentence encoder. Given an input sentence, it outputs a vector which captures the semantic information of the sentence. The sentence vector may be used for information retrieval, clustering, or sentence similarity tasks.
How to use
Here is how to use this model to get the features of a given text using the SentenceTransformers library:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')
text = "Replace me by any text you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```
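The same encoder can embed several sentences at once, and the resulting vectors can be compared directly. Below is a small sketch using the library's `util.cos_sim` helper; the example sentences and the printed scores are made up:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/all_datasets_v3_MiniLM-L12')

# Encode a batch of sentences into one vector each.
sentences = [
    "A man is eating food.",
    "Someone is having a meal.",
    "The sky is blue.",
]
embeddings = model.encode(sentences)

# Cosine similarity between the first sentence and the two others;
# semantically close sentences receive higher scores.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)  # e.g. tensor([[0.77, 0.15]]) -- values are illustrative
```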
We use the pretrained MiniLM-L12 model. Please refer to the model card for more detailed information about the pre-training procedure.
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply the cross-entropy loss by comparing with the true pairs, as sketched below.
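As a minimal sketch of this objective (written in PyTorch rather than the JAX/Flax code actually used for training, and with a hypothetical `scale` temperature), the in-batch loss can be expressed as:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(embeddings_a: torch.Tensor,
                     embeddings_b: torch.Tensor,
                     scale: float = 20.0) -> torch.Tensor:
    """In-batch contrastive loss: sentence i in `embeddings_a` should be
    most similar to its true pair, row i of `embeddings_b`; every other
    row in the batch serves as a negative."""
    # Cosine similarity between every possible sentence pair in the batch.
    a = F.normalize(embeddings_a, dim=-1)
    b = F.normalize(embeddings_b, dim=-1)
    scores = a @ b.T * scale  # shape: (batch, batch)
    # The true pair of row i sits in column i, so the target label is i.
    labels = torch.arange(scores.size(0), device=scores.device)
    return F.cross_entropy(scores, labels)
```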
We trained our model on a TPU v3-8. We trained the model for 540k steps with a batch size of 1,024 (128 per TPU core). We used a learning-rate warm-up of 500 steps and limited the sequence length to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is accessible in this current repository.
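For illustration only, a comparable fine-tuning run could be configured with the SentenceTransformers training API as below; the base checkpoint name, the single toy training pair, and the choice of `MultipleNegativesRankingLoss` are assumptions on my part, and the training script in this repository remains the authoritative reference:

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Assumed base checkpoint; the card above only says "MiniLM-L12".
model = SentenceTransformer('microsoft/MiniLM-L12-H384-uncased')
model.max_seq_length = 128  # sequence length limited to 128 tokens

# A single toy pair for illustration; the real run used ~1B pairs.
train_examples = [
    InputExample(texts=["How do planes fly?",
                        "Lift generated by the wings keeps planes in the air."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1024)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=500,                # learning-rate warm-up of 500 steps
    optimizer_params={'lr': 2e-5},   # AdamW (the default) with lr 2e-5
)
```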
We use the concatenation of multiple datasets to fine-tune our model. The total number of sentence pairs is above 1 billion. We sampled each dataset with a weighted probability; the datasets and their numbers of training tuples are listed in the table below, and a minimal sketch of such weighted sampling follows it.
| Dataset | Paper | Number of training tuples |
|---|---|---|
| GOOAQ: Open Question Answering with Diverse Answer Types | paper | 3,012,496 |
| COCO 2020 | paper | 828,395 |
| Natural Questions (NQ) | paper | 100,231 |
| Quora Question Pairs | - | 103,663 |
| AllNLI (SNLI and MultiNLI) | paper SNLI, paper MultiNLI | 277,230 |
| Yahoo Answers Title/Answer | paper | 1,198,260 |
| Yahoo Answers Title/Question | paper | 659,896 |
| Yahoo Answers Question/Answer | paper | 681,164 |
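As a rough, hypothetical sketch of such size-weighted sampling (the dataset keys are made-up identifiers and only a few entries from the table are shown):

```python
import random

# Number of training tuples per dataset, abridged from the table above.
dataset_sizes = {
    "gooaq": 3_012_496,
    "coco_2020": 828_395,
    "natural_questions": 100_231,
    "quora_question_pairs": 103_663,
}
names = list(dataset_sizes)
weights = [dataset_sizes[name] for name in names]

def next_dataset() -> str:
    """Pick the dataset the next batch is drawn from, with probability
    proportional to its number of training tuples."""
    return random.choices(names, weights=weights, k=1)[0]

print(next_dataset())  # e.g. 'gooaq' most of the time
```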