nreimers committed
Commit ca3b3e9
1 Parent(s): d869fb6

Add new SentenceTransformer model.

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "word_embedding_dimension": 768,
+   "pooling_mode_cls_token": false,
+   "pooling_mode_mean_tokens": true,
+   "pooling_mode_max_tokens": false,
+   "pooling_mode_mean_sqrt_len_tokens": false
+ }
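These flags select plain mean pooling over the token embeddings. A minimal sketch of what that computes (tensor names and shapes are illustrative, not taken from this repo; the logic mirrors the `mean_pooling` helper in the old README below):

```python
import torch

def mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    # token_embeddings: (batch, seq_len, 768); attention_mask: (batch, seq_len)
    # Expand the mask so padding tokens contribute nothing to the average.
    mask = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    summed = torch.sum(token_embeddings * mask, dim=1)
    counts = torch.clamp(mask.sum(dim=1), min=1e-9)  # guard against all-padding rows
    return summed / counts  # (batch, 768) sentence embeddings
```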
2_Dense/config.json ADDED
@@ -0,0 +1 @@
+ {"in_features": 768, "out_features": 768, "bias": false, "activation_function": "torch.nn.modules.linear.Identity"}
2_Dense/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e5365f1d15b18a7ae7d2e500bf404b6d3cf45397afe1a2086641ff56bc5d7d1a
+ size 2360171
README.md CHANGED
@@ -1,95 +1,69 @@
- # Sentence Embedding Model for MS MARCO Passage Retrieval
- 
- This is a `distilbert-base` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.
- 
- You can use this model for semantic search. Details can be found at [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).
- 
- This model was optimized to be used with the **dot product** as the similarity function between queries and documents.
- 
- ## Training
- 
- Details about the training of the models can be found at [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html).
- 
- ## Performance
- 
- For performance details, see [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html).
- 
- ## Usage (HuggingFace Models Repository)
- 
- You can use the model directly from the model repository to compute sentence embeddings:
 ```python
- from transformers import AutoTokenizer, AutoModel
- import torch
- 
- # Mean pooling: take the attention mask into account for correct averaging
- def mean_pooling(model_output, attention_mask):
-     token_embeddings = model_output[0]  # first element of model_output contains all token embeddings
-     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
-     sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
-     sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
-     return sum_embeddings / sum_mask
- 
- # Queries we want embeddings for
- queries = ['What is the capital of France?', 'How many people live in New York City?']
- 
- # Passages that provide answers
- passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
- 
- # Load the tokenizer and model from the Hugging Face model repository
- tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-base-dot-prod-v3")
- model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-base-dot-prod-v3")
- 
- def compute_embeddings(sentences):
-     # Tokenize sentences
-     encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
- 
-     # Compute token embeddings
-     with torch.no_grad():
-         model_output = model(**encoded_input)
- 
-     # Perform pooling (mean pooling in this case)
-     return mean_pooling(model_output, encoded_input['attention_mask'])
- 
- query_embeddings = compute_embeddings(queries)
- passage_embeddings = compute_embeddings(passages)
- ```
- 
- ## Usage (Sentence-Transformers)
- 
- Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
 ```
- pip install -U sentence-transformers
 ```
- 
- Then you can use the model like this:
- ```python
- from sentence_transformers import SentenceTransformer
- model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-prod-v3')
- 
- # Queries we want embeddings for
- queries = ['What is the capital of France?', 'How many people live in New York City?']
- 
- # Passages that provide answers
- passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']
- 
- query_embeddings = model.encode(queries)
- passage_embeddings = model.encode(passages)
- ```
- 
- ## Changes in v3
- 
- The v2 models were used to find similar passages for all training queries. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.
- 
- If they received a low score from the cross-encoder, we saved them as hard negatives: they got a high score from the bi-encoder, but a low score from the (better) cross-encoder.
- 
- We then trained the v2 models with these new hard negatives.
- 
 ## Citing & Authors
 
 If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
- ```
 @inproceedings{reimers-2019-sentence-bert,
     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
     author = "Reimers, Nils and Gurevych, Iryna",
 
+ ---
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ ---
+ # sentence-transformers/msmarco-distilbert-base-dot-prod-v3
+ 
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
+ 
+ ## Usage (Sentence-Transformers)
+ 
+ Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+ 
+ ```
+ pip install -U sentence-transformers
+ ```
+ 
+ Then you can use the model like this:
+ 
 ```python
+ from sentence_transformers import SentenceTransformer
+ sentences = ["This is an example sentence", "Each sentence is converted"]
+ 
+ model = SentenceTransformer('sentence-transformers/msmarco-distilbert-base-dot-prod-v3')
+ embeddings = model.encode(sentences)
+ print(embeddings)
+ ```
+ 
+ ## Evaluation Results
+ 
+ For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=sentence-transformers/msmarco-distilbert-base-dot-prod-v3)
+ 
+ ## Full Model Architecture
+ 
 ```
+ SentenceTransformer(
+   (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
+   (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
+   (2): Dense({'in_features': 768, 'out_features': 768, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'})
+ )
 ```
 
 ## Citing & Authors
+ 
+ This model was trained by [sentence-transformers](https://www.sbert.net/).
+ 
 If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
+ ```bibtex
 @inproceedings{reimers-2019-sentence-bert,
     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
     author = "Reimers, Nils and Gurevych, Iryna",
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-   "_name_or_path": "../output/distilbert-base-uncased-mined_hard_neg-mean-pooling-dot_prod-no_identifier-epoch10-batchsize75-2021-03-21_13-53-07/0_Transformer",
+   "_name_or_path": "old_models/msmarco-distilbert-base-dot-prod-v3/0_Transformer",
    "activation": "gelu",
    "architectures": [
      "DistilBertModel"
@@ -18,6 +18,6 @@
    "seq_classif_dropout": 0.2,
    "sinusoidal_pos_embds": false,
    "tie_weights_": true,
-   "transformers_version": "4.4.1",
+   "transformers_version": "4.7.0",
    "vocab_size": 30522
 }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+   "__version__": {
+     "sentence_transformers": "2.0.0",
+     "transformers": "4.7.0",
+     "pytorch": "1.9.0+cu102"
+   }
+ }
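This file only records the versions the model was exported with. A small sketch comparing them against the local environment (assumes a local clone of this repo containing the file added above):

```python
import json
import sentence_transformers
import torch
import transformers

# Versions the checkpoint was saved with, from config_sentence_transformers.json.
with open('config_sentence_transformers.json') as f:
    pinned = json.load(f)['__version__']

installed = {
    'sentence_transformers': sentence_transformers.__version__,
    'transformers': transformers.__version__,
    'pytorch': torch.__version__,
}
for pkg, version in pinned.items():
    print(f'{pkg}: saved with {version}, installed {installed[pkg]}')
```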
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.models.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_Pooling",
+     "type": "sentence_transformers.models.Pooling"
+   },
+   {
+     "idx": 2,
+     "name": "2",
+     "path": "2_Dense",
+     "type": "sentence_transformers.models.Dense"
+   }
+ ]
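modules.json is what tells SentenceTransformer to chain the three modules (Transformer → Pooling → Dense) at load time. As a sketch, an equivalent pipeline can be assembled by hand with the models API; the module arguments below are copied from the configs in this commit, while the base checkpoint name is an assumption:

```python
import torch.nn as nn
from sentence_transformers import SentenceTransformer, models

# Module 0: the DistilBERT encoder (base checkpoint name is assumed here;
# max_seq_length matches the architecture listing in the README).
word_embedding = models.Transformer('distilbert-base-uncased', max_seq_length=512)
# Module 1: mean pooling, per 1_Pooling/config.json.
pooling = models.Pooling(word_embedding.get_word_embedding_dimension(),
                         pooling_mode_mean_tokens=True)
# Module 2: bias-free linear projection, per 2_Dense/config.json.
dense = models.Dense(in_features=768, out_features=768, bias=False,
                     activation_function=nn.Identity())

model = SentenceTransformer(modules=[word_embedding, pooling, dense])
```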
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:0d2805041081f6c6e0bd3c903a6c2b5a23551ec8063b91bcee90d05478cdaa71
- size 265491187
+ oid sha256:e67d89134719a423d5978c8760b259f4e3106dc6b25c6bac4ccb50e7fbbeda38
+ size 265486777
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json CHANGED
@@ -1 +1 @@
- {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "../output/distilbert-base-uncased-mined_hard_neg-mean-pooling-dot_prod-no_identifier-epoch10-batchsize75-2021-03-21_13-53-07/0_Transformer", "do_basic_tokenize": true, "never_split": null}
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "old_models/msmarco-distilbert-base-dot-prod-v3/0_Transformer", "do_basic_tokenize": true, "never_split": null}