gustavon2vec committed on
Commit
6acdf78
1 Parent(s): 8632002

upload files from original repo

1_Pooling/config.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "word_embedding_dimension": 384,
+     "pooling_mode_cls_token": false,
+     "pooling_mode_mean_tokens": true,
+     "pooling_mode_max_tokens": false,
+     "pooling_mode_mean_sqrt_len_tokens": false
+ }
README.md ADDED
@@ -0,0 +1,139 @@
+ ---
+ language:
+ - en
+ pipeline_tag: sentence-similarity
+ tags:
+ - sentence-transformers
+ - feature-extraction
+ - sentence-similarity
+ - transformers
+ ---
+
+ # msmarco-MiniLM-L6-cos-v5
+ This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and was designed for **semantic search**. It has been trained on 500k (query, answer) pairs from the [MS MARCO Passages dataset](https://github.com/microsoft/MSMARCO-Passage-Ranking). For an introduction to semantic search, have a look at [SBERT.net - Semantic Search](https://www.sbert.net/examples/applications/semantic-search/README.html).
+
+
+ ## Usage (Sentence-Transformers)
+ Using this model is easy once you have [sentence-transformers](https://www.SBERT.net) installed:
+
+ ```
+ pip install -U sentence-transformers
+ ```
+
+ Then you can use the model like this:
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ query = "How many people live in London?"
+ docs = ["Around 9 Million people live in London", "London is known for its financial district"]
+
+ # Load the model
+ model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
+
+ # Encode query and documents
+ query_emb = model.encode(query)
+ doc_emb = model.encode(docs)
+
+ # Compute dot scores between the query and all document embeddings
+ scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()
+
+ # Combine docs & scores
+ doc_score_pairs = list(zip(docs, scores))
+
+ # Sort by decreasing score
+ doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
+
+ # Output passages & scores
+ for doc, score in doc_score_pairs:
+     print(score, doc)
+ ```
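+
+ For larger document collections, computing and sorting scores by hand becomes unwieldy; `util.semantic_search` batches this ranking for you. A minimal sketch, reusing the toy corpus from above (`top_k` simply limits how many hits are returned per query):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
+
+ docs = ["Around 9 Million people live in London", "London is known for its financial district"]
+ doc_emb = model.encode(docs, convert_to_tensor=True)
+ query_emb = model.encode("How many people live in London?", convert_to_tensor=True)
+
+ # Returns, per query, a list of {'corpus_id': ..., 'score': ...} sorted by decreasing score
+ hits = util.semantic_search(query_emb, doc_emb, top_k=2, score_function=util.dot_score)[0]
+ for hit in hits:
+     print(hit['score'], docs[hit['corpus_id']])
+ ```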
+
+
+ ## Usage (HuggingFace Transformers)
+ Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.
+
+ ```python
+ from transformers import AutoTokenizer, AutoModel
+ import torch
+ import torch.nn.functional as F
+
+ # Mean pooling: average all token embeddings, weighted by the attention mask
+ def mean_pooling(model_output, attention_mask):
+     token_embeddings = model_output.last_hidden_state  # First element of model_output contains all token embeddings
+     input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
+     return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
+
+
+ # Encode text
+ def encode(texts):
+     # Tokenize sentences
+     encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')
+
+     # Compute token embeddings
+     with torch.no_grad():
+         model_output = model(**encoded_input, return_dict=True)
+
+     # Perform pooling
+     embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
+
+     # Normalize embeddings
+     embeddings = F.normalize(embeddings, p=2, dim=1)
+
+     return embeddings
+
+
+ # Sentences we want sentence embeddings for
+ query = "How many people live in London?"
+ docs = ["Around 9 Million people live in London", "London is known for its financial district"]
+
+ # Load model from HuggingFace Hub
+ tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
+ model = AutoModel.from_pretrained("sentence-transformers/msmarco-MiniLM-L6-cos-v5")
+
+ # Encode query and docs
+ query_emb = encode(query)
+ doc_emb = encode(docs)
+
+ # Compute dot scores between the query and all document embeddings
+ scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()
+
+ # Combine docs & scores
+ doc_score_pairs = list(zip(docs, scores))
+
+ # Sort by decreasing score
+ doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
+
+ # Output passages & scores
+ for doc, score in doc_score_pairs:
+     print(score, doc)
+ ```
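+
+ The module pipeline for this model is Transformer → Pooling → Normalize (see `modules.json` in the repository), which the code above reproduces manually. Equivalently, the pipeline can be assembled from sentence-transformers building blocks; a minimal sketch (assembling by hand like this is optional, loading the model by name does the same thing):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, models
+
+ word_embedding_model = models.Transformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5', max_seq_length=384)
+ pooling_model = models.Pooling(word_embedding_model.get_word_embedding_dimension(), pooling_mode_mean_tokens=True)
+ normalize_model = models.Normalize()
+
+ model = SentenceTransformer(modules=[word_embedding_model, pooling_model, normalize_model])
+ ```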
+
+ ## Technical Details
+
+ Some technical details on how this model should be used:
+
+ | Setting | Value |
+ | --- | :---: |
+ | Dimensions | 384 |
+ | Produces normalized embeddings | Yes |
+ | Pooling method | Mean pooling |
+ | Suitable score functions | dot-product (`util.dot_score`), cosine-similarity (`util.cos_sim`), or Euclidean distance |
+
+ Note: When loaded with `sentence-transformers`, this model produces normalized embeddings of length 1. In that case, dot-product and cosine-similarity are equivalent, and dot-product is preferred as it is faster. Euclidean distance yields the same ranking as dot-product (for unit vectors, squared Euclidean distance equals 2 - 2 * dot-product) and can also be used.
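+
+ A quick numerical check of this equivalence (a minimal sketch; the embeddings come out unit-length because of the pipeline's Normalize module):
+
+ ```python
+ from sentence_transformers import SentenceTransformer, util
+
+ model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
+ emb = model.encode(["Around 9 Million people live in London",
+                     "London is known for its financial district"], convert_to_tensor=True)
+
+ # For unit-length embeddings, both score matrices agree up to floating-point precision
+ print(util.dot_score(emb, emb))
+ print(util.cos_sim(emb, emb))
+ ```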
+
+ ## Citing & Authors
+
+ This model was trained by [sentence-transformers](https://www.sbert.net/).
+
+ If you find this model helpful, feel free to cite our publication [Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks](https://arxiv.org/abs/1908.10084):
+ ```bibtex
+ @inproceedings{reimers-2019-sentence-bert,
+     title = "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks",
+     author = "Reimers, Nils and Gurevych, Iryna",
+     booktitle = "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing",
+     month = "11",
+     year = "2019",
+     publisher = "Association for Computational Linguistics",
+     url = "http://arxiv.org/abs/1908.10084",
+ }
+ ```
config.json ADDED
@@ -0,0 +1,24 @@
+ {
+     "_name_or_path": "old_models/msmarco-MiniLM-L-6-v3/0_Transformer",
+     "architectures": [
+         "BertModel"
+     ],
+     "attention_probs_dropout_prob": 0.1,
+     "gradient_checkpointing": false,
+     "hidden_act": "gelu",
+     "hidden_dropout_prob": 0.1,
+     "hidden_size": 384,
+     "initializer_range": 0.02,
+     "intermediate_size": 1536,
+     "layer_norm_eps": 1e-12,
+     "max_position_embeddings": 512,
+     "model_type": "bert",
+     "num_attention_heads": 12,
+     "num_hidden_layers": 6,
+     "pad_token_id": 0,
+     "position_embedding_type": "absolute",
+     "transformers_version": "4.7.0",
+     "type_vocab_size": 2,
+     "use_cache": true,
+     "vocab_size": 30522
+ }
config_sentence_transformers.json ADDED
@@ -0,0 +1,7 @@
+ {
+     "__version__": {
+         "sentence_transformers": "2.0.0",
+         "transformers": "4.7.0",
+         "pytorch": "1.9.0+cu102"
+     }
+ }
flax_model.msgpack ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ce670108f3abe4a1513aa973c696e273397db899a983deecdd86a987f9a63dc
+ size 90856603
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
modules.json ADDED
@@ -0,0 +1,20 @@
+ [
+     {
+         "idx": 0,
+         "name": "0",
+         "path": "",
+         "type": "sentence_transformers.models.Transformer"
+     },
+     {
+         "idx": 1,
+         "name": "1",
+         "path": "1_Pooling",
+         "type": "sentence_transformers.models.Pooling"
+     },
+     {
+         "idx": 2,
+         "name": "2",
+         "path": "2_Normalize",
+         "type": "sentence_transformers.models.Normalize"
+     }
+ ]
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5e3a29b2fc7bce0f6b0bdd35dcd6e6d1c1dd5fc191561d0b9c5d3aadf3891e0b
+ size 90895153
sentence_bert_config.json ADDED
@@ -0,0 +1,4 @@
+ {
+     "max_seq_length": 384,
+     "do_lower_case": false
+ }
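
Note: `max_seq_length: 384` means inputs longer than 384 word pieces are truncated during encoding. When the model is loaded with sentence-transformers, this value can be inspected or lowered at runtime; a minimal sketch:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/msmarco-MiniLM-L6-cos-v5')
print(model.max_seq_length)  # 384, read from sentence_bert_config.json
model.max_seq_length = 256   # optional: truncate earlier to speed up encoding of long texts
```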
special_tokens_map.json ADDED
@@ -0,0 +1 @@
+ {"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
tf_model.h5 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5f4f194a925d21f972ae6149d1f9a94dae43886a3fff1886977a950c6862fcd9
+ size 91005696
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1 @@
+ {"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "model_max_length": 512, "name_or_path": "old_models/msmarco-MiniLM-L-6-v3/0_Transformer", "do_basic_tokenize": true, "never_split": null, "special_tokens_map_file": "old_models/msmarco-MiniLM-L-6-v3/0_Transformer/special_tokens_map.json"}
vocab.txt ADDED
The diff for this file is too large to render. See raw diff