nreimers committed
Commit 5ef767d
1 Parent(s): 2040317
README.md ADDED
@@ -0,0 +1,90 @@
+ # Sentence Embedding Model for MS MARCO Passage Retrieval
+
+ This is a `roberta-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
+
+ You can use this model for semantic search. Details can be found at [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html).
+
+ ## Training
+
+ Details about the training of the models can be found at [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html).
+
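+ The linked page documents the exact training setup. Purely as an illustration of the sentence-transformers training API (not necessarily the recipe used for this particular model), the sketch below fine-tunes a `roberta-base` encoder on (query, relevant passage) pairs with in-batch negatives; the example pairs are hypothetical stand-ins for MS MARCO training data:
+
+ ```python
+ from torch.utils.data import DataLoader
+ from sentence_transformers import SentenceTransformer, models, losses, InputExample
+
+ # Transformer encoder + mean pooling = sentence embedding model
+ word_embedding_model = models.Transformer('roberta-base', max_seq_length=250)
+ pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension())
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
+
+ # Hypothetical (query, relevant passage) pairs; in practice these come from MS MARCO
+ train_examples = [
+     InputExample(texts=['What is the capital of France?', 'Paris is the capital of France']),
+     InputExample(texts=['How many people live in New York City?', 'New York City has about 8.3 million residents']),
+ ]
+ train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
+
+ # MultipleNegativesRankingLoss: other passages in the batch act as negatives
+ train_loss = losses.MultipleNegativesRankingLoss(model)
+
+ model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=100)
+ ```
+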
+ ## Usage (HuggingFace Models Repository)
+
+ You can use the model directly from the model repository to compute sentence embeddings:
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+
+
+ # Mean pooling - take the attention mask into account for correct averaging
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
+     sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+     return sum_embeddings / sum_mask
+
+
+ # Queries we want embeddings for
+ queries = ['What is the capital of France?', 'How many people live in New York City?']
+
+ # Passages that provide answers
+ passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
+
+ # Load AutoModel from the Hugging Face model repository
+ tokenizer = AutoTokenizer.from_pretrained("model_name")
+ model = AutoModel.from_pretrained("model_name")
+
+ def compute_embeddings(sentences):
+     # Tokenize sentences
+     encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+     # Compute token embeddings
+     with torch.no_grad():
+         model_output = model(**encoded_input)
+
+     # Perform pooling (here: mean pooling)
+     return mean_pooling(model_output, encoded_input['attention_mask'])
+
+ query_embeddings = compute_embeddings(queries)
+ passage_embeddings = compute_embeddings(passages)
+ ```
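+
+ To use these embeddings for semantic search, compare each query vector against the passage vectors, for example with cosine similarity. A minimal sketch that continues from the snippet above (it reuses `queries`, `passages`, `query_embeddings`, and `passage_embeddings`):
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ # Cosine similarity = dot product of L2-normalized vectors
+ query_norm = F.normalize(query_embeddings, p=2, dim=1)
+ passage_norm = F.normalize(passage_embeddings, p=2, dim=1)
+ scores = query_norm @ passage_norm.T  # shape: (num_queries, num_passages)
+
+ # Print the best-matching passage for each query
+ for i, query in enumerate(queries):
+     best_idx = int(torch.argmax(scores[i]))
+     print(f"{query} -> {passages[best_idx]} (score: {scores[i][best_idx].item():.4f})")
+ ```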
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+ ```python
+ from sentence_transformers import SentenceTransformer
+ model = SentenceTransformer('model_name')
+
+ # Queries we want embeddings for
+ queries = ['What is the capital of France?', 'How many people live in New York City?']
+
+ # Passages that provide answers
+ passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
+
+ query_embeddings = model.encode(queries)
+ passage_embeddings = model.encode(passages)
+ ```
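+
+ For retrieval, the sentence-transformers `util` module provides ranking helpers. A minimal sketch that continues from the snippet above; it assumes `util.semantic_search`, which scores each query against the passage embeddings by cosine similarity:
+
+ ```python
+ from sentence_transformers import util
+
+ # For each query, retrieve the single best-matching passage (top_k=1)
+ hits = util.semantic_search(query_embeddings, passage_embeddings, top_k=1)
+
+ for query, query_hits in zip(queries, hits):
+     best = query_hits[0]
+     print(f"{query} -> {passages[best['corpus_id']]} (score: {float(best['score']):.4f})")
+ ```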
+
+ ## Citing & Authors
+
+ If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
+ ```
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "http://arxiv.org/abs/1908.10084",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,23 @@
+ {
+   "_name_or_path": "roberta-base",
+   "architectures": [
+     "RobertaModel"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "bos_token_id": 0,
+   "eos_token_id": 2,
+   "gradient_checkpointing": false,
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 514,
+   "model_type": "roberta",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "type_vocab_size": 1,
+   "vocab_size": 50265
+ }
merges.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b89a31974e98605c6166a4acb0ae8c8c90e039b65418d89842f6cf5a1375f5c7
+ size 498669047
sentence_bert_config.json ADDED
@@ -0,0 +1,3 @@
+ {
+   "max_seq_length": 250
+ }
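This file caps inputs to 250 word pieces; longer texts are truncated. When the model is loaded with sentence-transformers, that limit is exposed as `model.max_seq_length`. A minimal sketch of inspecting and overriding it, assuming `'model_name'` is a placeholder for this repository and that sentence-transformers reads this config when building the model:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('model_name')  # placeholder repo id, as in the README
print(model.max_seq_length)                # expected: 250, taken from sentence_bert_config.json

# The cap can be raised, but not beyond the underlying transformer's
# position-embedding limit (512 tokens for roberta-base).
model.max_seq_length = 300
```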
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true}}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "name_or_path": "roberta-base"}
vocab.json ADDED
The diff for this file is too large to render. See raw diff