nreimers committed
Commit 0b7568d
1 Parent(s): a963848

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "word_embedding_dimension": 768,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false
+ }
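This pooling configuration enables masked mean pooling over 768-dimensional token embeddings, with CLS, max, and sqrt-length pooling switched off. As a rough sketch (not part of the committed files), the same settings can be built directly with sentence-transformers' `models.Pooling` module, assuming its keyword arguments mirror the config keys above:

```python
from sentence_transformers import models

# Sketch: the Pooling module described by 1_Pooling/config.json.
# Only mean pooling over token embeddings is enabled.
pooling = models.Pooling(
    word_embedding_dimension=768,
    pooling_mode_cls_token=False,
    pooling_mode_mean_tokens=True,
    pooling_mode_max_tokens=False,
    pooling_mode_mean_sqrt_len_tokens=False,
)
print(pooling.get_sentence_embedding_dimension())  # expected: 768
```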
README.md CHANGED
@@ -1,18 +1,42 @@
- # Sentence Embedding Model for MS MARCO Passage Retrieval

- This a `distilroberta-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers)-repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): Given a search query, it finds the relevant passages.

- You can use this model for semantic search. Details can be found on: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html) and [SBERT.net - Information Retrieval](https://www.sbert.net/examples/applications/information-retrieval/README.html)

- ## Training

- Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)

- ## Usage (HuggingFace Models Repository)

- You can use the model directly from the model repository to compute sentence embeddings:
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch
@@ -22,62 +46,54 @@ import torch
  def mean_pooling(model_output, attention_mask):
      token_embeddings = model_output[0] #First element of model_output contains all token embeddings
      input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
-     sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-     return sum_embeddings / sum_mask

- # Queries we want embeddings for
- queries = ['What is the capital of France?', 'How many people live in New York City?']

- # Passages that provide answers
- passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']

- #Load AutoModel from huggingface model repository
- tokenizer = AutoTokenizer.from_pretrained("model_name")
- model = AutoModel.from_pretrained("model_name")

- def compute_embeddings(sentences):
-     #Tokenize sentences
-     encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

-     #Compute query embeddings
-     with torch.no_grad():
-         model_output = model(**encoded_input)

-     #Perform pooling. In this case, mean pooling
-     return mean_pooling(model_output, encoded_input['attention_mask'])

- query_embeddings = compute_embeddings(queries)
- passage_embeddings = compute_embeddings(passages)
- ```

- ## Usage (Sentence-Transformers)
- Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
- ```
- pip install -U sentence-transformers
- ```

- Then you can use the model like this:
- ```python
- from sentence_transformers import SentenceTransformer
- model = SentenceTransformer('model_name')

- # Queries we want embeddings for
- queries = ['What is the capital of France?', 'How many people live in New York City?']

- # Passages that provide answers
- passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']

- query_embeddings = model.encode(queries)
- passage_embeddings = model.encode(passages)
- ```

  ## Citing & Authors
  If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
- ```
  @inproceedings{reimers-2019-sentence-bert,
      title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
      author = "Reimers, Nils and Gurevych, Iryna",
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ ---
+
+ # sentence-transformers/msmarco-distilroberta-base-v2
+
+ This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+ ## Usage (Sentence-Transformers)
+
+ Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+
+ ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+
+ model = SentenceTransformer('sentence-transformers/msmarco-distilroberta-base-v2')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
+
+ ## Usage (HuggingFace Transformers)
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
+
  ```python
  from transformers import AutoTokenizer, AutoModel
  import torch
  def mean_pooling(model_output, attention_mask):
      token_embeddings = model_output[0] #First element of model_output contains all token embeddings
      input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+ # Sentences we want sentence embeddings for
+ sentences = ['This is an example sentence', 'Each sentence is converted']
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained('sentence-transformers/msmarco-distilroberta-base-v2')
+ model = AutoModel.from_pretrained('sentence-transformers/msmarco-distilroberta-base-v2')
+
+ # Tokenize sentences
+ encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
+
+ # Compute token embeddings
+ with torch.no_grad():
+     model_output = model(**encoded_input)
+
+ # Perform pooling. In this case, mean pooling.
+ sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+ print("Sentence embeddings:")
+ print(sentence_embeddings)
+ ```
+
+ ## Evaluation Results
+
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilroberta-base-v2)
+
+ ## Full Model Architecture
+ ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: RobertaModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+ )
+ ```
+
  ## Citing & Authors
+
+ This model was trained by [sentence-transformers](https://www.sbert.net/).
+
  If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
+ ```bibtex
  @inproceedings{reimers-2019-sentence-bert,
      title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
      author = "Reimers, Nils and Gurevych, Iryna",
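The updated card says the embeddings can be used for semantic search. As an illustrative sketch only (not part of this commit), ranking a couple of made-up passages against a query with sentence-transformers' `util.pytorch_cos_sim` helper could look like this:

```python
from sentence_transformers import SentenceTransformer, util

# Sketch: simple semantic search with cosine similarity.
model = SentenceTransformer('sentence-transformers/msmarco-distilroberta-base-v2')

query = 'How many people live in New York City?'
passages = [
    'Paris is the capital of France',
    'New York City is the most populous city in the United States',
]

# Encode the query and candidate passages into 768-dim vectors.
query_emb = model.encode(query, convert_to_tensor=True)
passage_embs = model.encode(passages, convert_to_tensor=True)

# Cosine similarity between the query and every passage; higher is better.
scores = util.pytorch_cos_sim(query_emb, passage_embs)[0]
best = int(scores.argmax())
print(passages[best], float(scores[best]))
```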
config.json CHANGED
@@ -1,4 +1,5 @@
  {
+     "_name_or_path": "old_models/msmarco-distilroberta-base-v2/0_Transformer",
      "architectures": [
          "RobertaModel"
      ],
@@ -17,6 +18,9 @@
      "num_attention_heads": 12,
      "num_hidden_layers": 6,
      "pad_token_id": 1,
+     "position_embedding_type": "absolute",
+     "transformers_version": "4.7.0",
      "type_vocab_size": 1,
+     "use_cache": true,
      "vocab_size": 50265
  }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "__version__": {
+         "sentence_transformers": "2.0.0",
+         "transformers": "4.7.0",
+         "pytorch": "1.9.0+cu102"
+     }
+ }
merges.txt CHANGED
@@ -1,4 +1,4 @@
- #version: 0.2
+ #version: 0.2 - Trained by `huggingface/tokenizers`
  Ġ t
  Ġ a
  h e
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     }
+ ]
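modules.json wires two modules into the SentenceTransformer pipeline: a Transformer module at the repository root and the Pooling module under `1_Pooling/`. As a sketch of that layout (the base checkpoint name `distilroberta-base` here is an assumption for illustration only), an equivalent pipeline can be assembled manually:

```python
from sentence_transformers import SentenceTransformer, models

# Sketch: Transformer -> Pooling pipeline, mirroring modules.json.
word_embedding_model = models.Transformer('distilroberta-base', max_seq_length=350)
pooling_model = models.Pooling(
    word_embedding_model.get_word_embedding_dimension(),  # 768 for this model
    pooling_mode_mean_tokens=True,
)
model = SentenceTransformer(modules=[word_embedding_model, pooling_model])
print(model)
```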
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:5585abb96b5be7a4594a4286a251c71ec43fb6d9f9d738831ca1f3b5649429fb
- size 328520407
+ oid sha256:287f1deba2b7737731981f72ecba16ad92cea0d8ded916b99ee5808617fa57af
+ size 328515953
sentence_bert_config.json CHANGED
@@ -1,3 +1,4 @@
  {
-     "max_seq_length": 350
+     "max_seq_length": 350,
+     "do_lower_case": false
  }
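The `max_seq_length` of 350 and the added `do_lower_case` flag are read when the model is loaded; inputs longer than 350 tokens are truncated at encode time. A minimal check, assuming the `max_seq_length` property exposed by SentenceTransformer:

```python
from sentence_transformers import SentenceTransformer

# Sketch: inspect the sequence-length limit loaded from sentence_bert_config.json.
model = SentenceTransformer('sentence-transformers/msmarco-distilroberta-base-v2')
print(model.max_seq_length)  # expected: 350
```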
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
tokenizer_config.json CHANGED
@@ -1 +1 @@
- {"model_max_length": 512}
+ {"unk_token": {"content": "<unk>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "sep_token": {"content": "</s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "cls_token": {"content": "<s>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "pad_token": {"content": "<pad>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "mask_token": {"content": "<mask>", "single_word": false, "lstrip": true, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "model_max_length": 512, "special_tokens_map_file": "old_models/msmarco-distilroberta-base-v2/0_Transformer/special_tokens_map.json", "name_or_path": "old_models/msmarco-distilroberta-base-v2/0_Transformer"}
vocab.json CHANGED
The diff for this file is too large to render. See raw diff