nreimers committed
Commit d869fb6 • 1 Parent(s): a74cd49

upload
Browse files
- README.md +102 -0
- config.json +23 -0
- pytorch_model.bin +3 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +1 -0
- tokenizer_config.json +1 -0
- vocab.txt +0 -0
README.md
ADDED
@@ -0,0 +1,102 @@
# Sentence Embedding Model for MS MARCO Passage Retrieval

This is a `distilbert-base-uncased` model from the [sentence-transformers](https://github.com/UKPLab/sentence-transformers) repository. It was trained on the [MS MARCO Passage Retrieval dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking): given a search query, it finds the relevant passages.

You can use this model for semantic search. Details can be found at: [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).

This model was optimized to be used with **dot-product** as the similarity function between queries and documents.
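Concretely, the relevance score of a passage for a query is the unnormalized inner product of the two embeddings, not their cosine similarity. A toy illustration with made-up 3-dimensional vectors:

```python
import torch

q = torch.tensor([0.5, 1.0, -0.2])  # toy query embedding (illustrative values only)
p = torch.tensor([0.4, 0.9, 0.1])   # toy passage embedding

# Dot-product keeps the vector magnitudes, unlike cosine similarity,
# which would divide by ||q|| * ||p||.
score = torch.dot(q, p)  # 0.5*0.4 + 1.0*0.9 + (-0.2)*0.1 = 1.08
```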
## Training
Details about the training of the models can be found here: [SBERT.net - MS MARCO](https://www.sbert.net/examples/training/ms_marco/README.html)

## Performance
For performance details, see: [SBERT.net - Pre-Trained Models - MS MARCO](https://www.sbert.net/docs/pretrained-models/msmarco-v3.html)
## Usage (HuggingFace Models Repository)

You can use the model directly from the model repository to compute sentence embeddings:
```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
    sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
    return sum_embeddings / sum_mask


# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']

# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']

# Load AutoModel from the Hugging Face model repository
tokenizer = AutoTokenizer.from_pretrained("model_name")
model = AutoModel.from_pretrained("model_name")


def compute_embeddings(sentences):
    # Tokenize sentences
    encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input)

    # Perform pooling. In this case, mean pooling
    return mean_pooling(model_output, encoded_input['attention_mask'])


query_embeddings = compute_embeddings(queries)
passage_embeddings = compute_embeddings(passages)
```
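Since the model was tuned for dot-product similarity, the embeddings from the snippet above can be scored with a plain matrix product. A minimal sketch, reusing the `queries`, `passages`, and embedding variables defined above:

```python
import torch

# Dot-product score of every query against every passage
# (rows: queries, columns: passages); no normalization is applied.
scores = query_embeddings @ passage_embeddings.T

# Rank the passages for each query by descending score
for query, query_scores in zip(queries, scores):
    print(query)
    for idx in torch.argsort(query_scores, descending=True).tolist():
        print(f"  {query_scores[idx].item():.2f}  {passages[idx]}")
```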
## Usage (Sentence-Transformers)
Using this model becomes more convenient when you have [sentence-transformers](https://github.com/UKPLab/sentence-transformers) installed:
```
pip install -U sentence-transformers
```

Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('model_name')

# Queries we want embeddings for
queries = ['What is the capital of France?', 'How many people live in New York City?']

# Passages that provide answers
passages = ['Paris is the capital of France', 'New York City is the most populous city in the United States, with an estimated 8,336,817 people living in the city, according to U.S. Census estimates dating July 1, 2019']

query_embeddings = model.encode(queries)
passage_embeddings = model.encode(passages)
```
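Scoring works the same way here. `model.encode` returns NumPy arrays by default, so a matrix product again gives the dot-product scores; a small sketch reusing the variables above (recent sentence-transformers releases also provide `util.dot_score` for this):

```python
import numpy as np

# Dot-product scores; rows are queries, columns are passages
scores = query_embeddings @ passage_embeddings.T

# For each query, show the best-scoring passage
for query, query_scores in zip(queries, scores):
    best = int(np.argmax(query_scores))
    print(f"{query} -> {passages[best]} (score: {query_scores[best]:.2f})")
```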
## Changes in v3
The v2 models were used to retrieve similar passages for every training query. An [MS MARCO Cross-Encoder](ce-msmarco.md) based on the electra-base model was then used to classify whether these retrieved passages answer the question.

If a passage received a low score from the cross-encoder, we saved it as a hard negative: it got a high score from the bi-encoder, but a low score from the (better) cross-encoder.

We then trained the v2 models with these new hard negatives.
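For illustration, the mining loop described above might look roughly like the following sketch. The checkpoint names, the `top_k` value, and the score threshold are assumptions for the example, not the exact settings used for training:

```python
from sentence_transformers import SentenceTransformer, CrossEncoder, util

# Assumed checkpoint names for illustration only; see the SBERT.net MS MARCO
# docs for the models that were actually used.
bi_encoder = SentenceTransformer('msmarco-distilbert-base-v2')
cross_encoder = CrossEncoder('cross-encoder/ms-marco-electra-base')

queries = ['how many people live in new york city']      # training queries
passages = ['New York City has about 8.3 million inhabitants',
            'Paris is the capital of France']            # passage corpus
ce_threshold = 0.1  # assumed cutoff: below this, a passage counts as a hard negative

# Step 1: the bi-encoder retrieves candidate passages for every training query
query_emb = bi_encoder.encode(queries, convert_to_tensor=True)
passage_emb = bi_encoder.encode(passages, convert_to_tensor=True)
hits = util.semantic_search(query_emb, passage_emb, top_k=100,
                            score_function=util.dot_score)

# Step 2: the cross-encoder re-scores each retrieved (query, passage) pair;
# pairs the bi-encoder ranked highly but the cross-encoder scores low
# are kept as hard negatives for the next training round.
hard_negatives = []
for query, query_hits in zip(queries, hits):
    pairs = [(query, passages[hit['corpus_id']]) for hit in query_hits]
    ce_scores = cross_encoder.predict(pairs)
    for hit, ce_score in zip(query_hits, ce_scores):
        if ce_score < ce_threshold:
            hard_negatives.append((query, passages[hit['corpus_id']]))
```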
## Citing & Authors
If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
```
@inproceedings{reimers-2019-sentence-bert,
    title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
    author = "Reimers, Nils and Gurevych, Iryna",
    booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
    month = "11",
    year = "2019",
    publisher = "Association for Computational Linguistics",
    url = "http://arxiv.org/abs/1908.10084",
}
```
config.json
ADDED
@@ -0,0 +1,23 @@
{
  "_name_or_path": "../output/distilbert-base-uncased-mined_hard_neg-mean-pooling-dot_prod-no_identifier-epoch10-batchsize75-2021-03-21_13-53-07/0_Transformer",
  "activation": "gelu",
  "architectures": [
    "DistilBertModel"
  ],
  "attention_dropout": 0.1,
  "dim": 768,
  "dropout": 0.1,
  "hidden_dim": 3072,
  "initializer_range": 0.02,
  "max_position_embeddings": 512,
  "model_type": "distilbert",
  "n_heads": 12,
  "n_layers": 6,
  "pad_token_id": 0,
  "qa_dropout": 0.1,
  "seq_classif_dropout": 0.2,
  "sinusoidal_pos_embds": false,
  "tie_weights_": true,
  "transformers_version": "4.4.1",
  "vocab_size": 30522
}
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d2805041081f6c6e0bd3c903a6c2b5a23551ec8063b91bcee90d05478cdaa71
size 265491187
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 512,
  "do_lower_case": false
}
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "special_tokens_map_file": null, "name_or_path": "../output/distilbert-base-uncased-mined_hard_neg-mean-pooling-dot_prod-no_identifier-epoch10-batchsize75-2021-03-21_13-53-07/0_Transformer", "do_basic_tokenize": true, "never_split": null}
vocab.txt
ADDED
The diff for this file is too large to render.
See raw diff