---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---

# bowdpr_wiki_triviaft

This is a retriever fine-tuned on the TriviaQA task (without distillation). We introduce a novel pre-training paradigm, Bag-of-Word Prediction, for dense retrieval. This retriever is initialized from a base-sized pre-trained model, [bowdpr/bowdpr_wiki](https://huggingface.co/bowdpr/bowdpr_wiki). Please refer to our [paper](https://arxiv.org/abs/2401.11248) for detailed pre-training and fine-tuning settings.

Fine-tuning on QA datasets follows a two-stage pipeline:

- s1: BM25 negatives
- s2: BM25 negatives + negatives mined with the s1 retriever

## Usage (Sentence-Transformers)

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.

Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('bowdpr/bowdpr_wiki_triviaft')
embeddings = model.encode(sentences)
print(embeddings)
```

## Usage (HuggingFace Transformers)

Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings (here, CLS pooling).

```python
from transformers import AutoTokenizer, AutoModel
import torch


def cls_pooling(model_output, attention_mask):
    # Use the hidden state of the first ([CLS]) token as the sentence embedding
    return model_output[0][:, 0]


# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('bowdpr/bowdpr_wiki_triviaft')
model = AutoModel.from_pretrained('bowdpr/bowdpr_wiki_triviaft')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, CLS pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])

print("Sentence embeddings:")
print(sentence_embeddings)
```

## Full Model Architecture

```
SentenceTransformerforCL(
  (0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors

If you are interested in our work, please consider citing our paper.

```
@misc{ma2024bow_pred,
      title={Drop your Decoder: Pre-training with Bag-of-Word Prediction for Dense Passage Retrieval},
      author={Guangyuan Ma and Xing Wu and Zijia Lin and Songlin Hu},
      year={2024},
      eprint={2401.11248},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}
```
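
## Example: Scoring Query-Passage Pairs

Since this model is a dense retriever, relevance is usually computed as the similarity between a query embedding and passage embeddings. The snippet below is a minimal sketch, not the official evaluation code: it assumes dot-product scoring (common for DPR-style retrievers; cosine similarity also works), and the query and passage strings are made up purely for illustration.

```python
from sentence_transformers import SentenceTransformer
import torch

model = SentenceTransformer('bowdpr/bowdpr_wiki_triviaft')

# Hypothetical query and candidate passages, for illustration only
query = "Who wrote the novel Moby-Dick?"
passages = [
    "Moby-Dick is an 1851 novel by American writer Herman Melville.",
    "The Great Gatsby was written by F. Scott Fitzgerald in 1925.",
]

# Encode the query and passages into 768-dimensional dense vectors
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Rank passages by dot-product similarity with the query
scores = passage_embs @ query_emb
best = torch.argmax(scores).item()
print(scores.tolist())
print("Best passage:", passages[best])
```

In a real setup you would pre-compute and index the passage embeddings (e.g. with FAISS) and only encode the query at search time.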