SentenceTransformers is a set of models and frameworks for training and generating sentence embeddings from given data. The generated sentence embeddings can be used for clustering, semantic search, and other tasks. We took a pretrained mpnet-base model and trained it with a Siamese network setup and a contrastive learning objective. Question and answer pairs from StackExchange were used as training data so that the model captures question/answer embedding similarity well.
We developed this model during the Community Week using JAX/Flax for NLP & CV, organized by Hugging Face, as part of the project: Train the Best Sentence Embedding Model Ever with 1B Training Pairs. We benefited from efficient hardware infrastructure to run the project (7 TPU v3-8 machines), as well as guidance from Google's Flax, JAX, and Cloud team members on efficient deep learning frameworks.
Our model is intended to be used as a sentence encoder for a search engine. Given an input sentence, it outputs a vector that captures the sentence's semantic information. The sentence vector can be used for semantic search, clustering, or sentence similarity tasks.
Here is how to use this model to get the features of a given text with the SentenceTransformers library:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('flax-sentence-embeddings/mpnet_stackexchange_v1')
text = "Replace me by any question / answer you'd like."
text_embedding = model.encode(text)
# array([-0.01559514,  0.04046123,  0.1317083 ,  0.00085931,  0.04585106,
#        -0.05607086,  0.0138078 ,  0.03569756,  0.01420381,  0.04266302 ...],
#       dtype=float32)
```
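Since the model is meant to serve as a sentence encoder for search, a common follow-up is to rank candidate passages by cosine similarity to a query. The sketch below is illustrative only; the corpus, query, and ranking logic are assumptions, while `util.cos_sim` is the SentenceTransformers helper for cosine similarity.

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('flax-sentence-embeddings/mpnet_stackexchange_v1')

# Hypothetical corpus of answers and a query to search against it.
corpus = [
    "You can undo the last commit with `git reset --soft HEAD~1`.",
    "Use a virtual environment to isolate Python dependencies.",
    "Increase the swap size if the build runs out of memory.",
]
query = "How do I revert my most recent git commit?"

corpus_embeddings = model.encode(corpus, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Rank corpus entries by cosine similarity to the query embedding.
scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
best = scores.argmax().item()
print(corpus[best], float(scores[best]))
```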
We use the pretrained `mpnet-base` model. Please refer to the model card for more detailed information about the pre-training procedure.
We fine-tune the model using a contrastive objective. Formally, we compute the cosine similarity between each possible sentence pair in the batch. We then apply a cross-entropy loss over these similarity scores, using the true pairs as targets.
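For clarity, here is a minimal PyTorch sketch of this in-batch contrastive objective. The function name and the `scale` factor on the similarities are assumptions for illustration; the core idea matches the description above: cosine similarities between all question/answer pairs in the batch, with cross-entropy pushing each question toward its true answer.

```python
import torch
import torch.nn.functional as F

def in_batch_contrastive_loss(question_emb, answer_emb, scale=20.0):
    """Cross-entropy over cosine similarities of all in-batch pairs.

    question_emb, answer_emb: (batch_size, dim) tensors where row i of each
    tensor forms a true (question, answer) pair. `scale` is an assumed
    sharpening factor applied before the softmax.
    """
    # Cosine similarity between every question and every answer in the batch.
    q = F.normalize(question_emb, dim=-1)
    a = F.normalize(answer_emb, dim=-1)
    sim = q @ a.T * scale                      # shape: (batch_size, batch_size)

    # The true answer for question i is answer i, so the targets are the diagonal.
    labels = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, labels)
```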
We trained the model on a TPU v3-8. Training ran for 80k steps with a batch size of 1024 (128 per TPU core). We used a learning-rate warm-up over the first 500 steps, and the sequence length was limited to 128 tokens. We used the AdamW optimizer with a 2e-5 learning rate. The full training script is available in this repository.
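The actual training used the JAX/Flax script in this repository. Purely as an illustration of the hyperparameters listed above, the sketch below plugs them into the SentenceTransformers `fit` API; the base checkpoint name, the toy training pairs, and the use of `MultipleNegativesRankingLoss` as a stand-in for the contrastive objective are assumptions.

```python
import torch
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical (question, answer) pairs standing in for the StackExchange data.
train_examples = [
    InputExample(texts=["How do I merge two dicts?", "Use the ** unpacking syntax ..."]),
    InputExample(texts=["Why is my query slow?", "Add an index on the filtered column ..."]),
]

model = SentenceTransformer('microsoft/mpnet-base')   # assumed base checkpoint
model.max_seq_length = 128                            # sequence length limit from the card

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=1024)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=500,                                 # learning-rate warm-up
    optimizer_class=torch.optim.AdamW,
    optimizer_params={'lr': 2e-5},
)
```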
We used the concatenation of multiple StackExchange question-answer datasets to fine-tune our model. Each StackExchange subforum was sampled with a weight given by the following formula:
```python
int((stackexchange_length[path] / total_stackexchange_length) * total_weight)
```
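In other words, each subforum's weight is proportional to its share of the total number of question-answer pairs. The sketch below expands the formula with made-up forum sizes and an assumed `total_weight`, just to show how the per-forum weights could be computed.

```python
# Hypothetical lengths (number of Q/A pairs) per StackExchange dump; the
# variable names mirror the formula above, and total_weight is an assumption.
stackexchange_length = {"apple": 92_000, "askubuntu": 267_000, "serverfault": 238_000}
total_stackexchange_length = sum(stackexchange_length.values())
total_weight = 100

sampling_weight = {
    path: int((length / total_stackexchange_length) * total_weight)
    for path, length in stackexchange_length.items()
}
print(sampling_weight)  # larger forums receive proportionally more samples
```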
MS MARCO, Natural Questions (NQ), and other question-answer datasets were also used. The sampling ratio between StackExchange and the remaining datasets was 2:1.