Commit e521b80
Parent: e592ce9

Add SGPT-125M-learntmean-nli

Files changed:
- 1_WeightedMeanPooling/config.json +5 -0
- 1_WeightedMeanPooling/pytorch_model.bin +3 -0
- README.md +89 -0
- config.json +54 -0
- config_sentence_transformers.json +7 -0
- eval/similarity_evaluation_sts-dev_results.csv +12 -0
- merges.txt +0 -0
- modules.json +14 -0
- pytorch_model.bin +3 -0
- sentence_bert_config.json +4 -0
- similarity_evaluation_sts-test_results.csv +2 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
1_WeightedMeanPooling/config.json
ADDED
@@ -0,0 +1,5 @@
+{
+    "word_embedding_dimension": 768,
+    "position_start": 0,
+    "num_positions": 75
+}
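The three fields above fully describe the learned pooling head: one weight per token position (`num_positions`: 75, matching the `max_seq_length` in `sentence_bert_config.json` further down) applied to 768-dimensional token embeddings. As a rough illustration only, a module consistent with this config could look like the sketch below; the class name mirrors the repository layout, but the exact weighting scheme and parameter names are assumptions rather than the actual `WeightedMeanPooling` implementation.

```python
import torch
from torch import nn


class WeightedMeanPooling(nn.Module):
    """Hypothetical sketch: pool token embeddings with one learnable weight per position."""

    def __init__(self, word_embedding_dimension: int = 768,
                 position_start: int = 0, num_positions: int = 75):
        super().__init__()
        self.word_embedding_dimension = word_embedding_dimension
        self.position_start = position_start
        # One learnable scalar per position; 75 floats is consistent with the ~1 KB
        # 1_WeightedMeanPooling/pytorch_model.bin recorded below.
        self.position_weights = nn.Parameter(torch.ones(num_positions))

    def forward(self, features: dict) -> dict:
        token_embeddings = features["token_embeddings"]       # (batch, seq_len, dim)
        attention_mask = features["attention_mask"].float()   # (batch, seq_len)
        seq_len = token_embeddings.size(1)
        weights = self.position_weights[self.position_start:self.position_start + seq_len]
        weights = weights.unsqueeze(0) * attention_mask        # zero out padding positions
        weights = weights / weights.sum(dim=1, keepdim=True).clamp(min=1e-9)
        features["sentence_embedding"] = (token_embeddings * weights.unsqueeze(-1)).sum(dim=1)
        return features
```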
1_WeightedMeanPooling/pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e8874c1ad2a51f3ace5cbe24c54f4e2b3bd3e804db92db92bc4cb08953e21a1a
+size 1067
README.md
ADDED
@@ -0,0 +1,89 @@
+---
+pipeline_tag: sentence-similarity
+tags:
+- sentence-transformers
+- feature-extraction
+- sentence-similarity
+---
+
+# {MODEL_NAME}
+
+This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
+
+<!--- Describe your model here -->
+
+## Usage (Sentence-Transformers)
+
+Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
+
+```
+pip install -U sentence-transformers
+```
+
+Then you can use the model like this:
+
+```python
+from sentence_transformers import SentenceTransformer
+sentences = ["This is an example sentence", "Each sentence is converted"]
+
+model = SentenceTransformer('{MODEL_NAME}')
+embeddings = model.encode(sentences)
+print(embeddings)
+```
+
+
+
+## Evaluation Results
+
+<!--- Describe how your model was evaluated -->
+
+For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
+
+
+## Training
+The model was trained with the parameters:
+
+**DataLoader**:
+
+`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 8807 with parameters:
+```
+{'batch_size': 64}
+```
+
+**Loss**:
+
+`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
+```
+{'scale': 20.0, 'similarity_fct': 'cos_sim'}
+```
+
+Parameters of the fit()-Method:
+```
+{
+    "epochs": 1,
+    "evaluation_steps": 880,
+    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
+    "max_grad_norm": 1,
+    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
+    "optimizer_params": {
+        "lr": 2e-05
+    },
+    "scheduler": "WarmupLinear",
+    "steps_per_epoch": null,
+    "warmup_steps": 881,
+    "weight_decay": 0.01
+}
+```
+
+
+## Full Model Architecture
+```
+SentenceTransformer(
+  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: GPTNeoModel
+  (1): WeightedMeanPooling()
+)
+```
+
+## Citing & Authors
+
+<!--- Describe where people can find more information -->
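The hyperparameters listed under "Training" map almost one-to-one onto a sentence-transformers training script. The sketch below shows that mapping, assuming an NLI-style triplet dataset and an STS dev set; the example texts and the model identifier are placeholders, not values taken from this commit.

```python
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader
from sentence_transformers.evaluation import EmbeddingSimilarityEvaluator

model = SentenceTransformer("path/to/SGPT-125M-learntmean-nli")  # placeholder identifier

# Placeholder data: a real run would load the full NLI triplets (anchor, entailment,
# contradiction); 8807 batches of 64 implies roughly 560k training examples.
train_examples = [
    InputExample(texts=["A man plays guitar.", "A person plays an instrument.", "Nobody is making music."]),
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=64)  # no duplicate texts per batch

# MultipleNegativesRankingLoss uses in-batch negatives with cosine similarity scaled by 20.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

# Writes eval/similarity_evaluation_sts-dev_results.csv every evaluation_steps steps.
dev_evaluator = EmbeddingSimilarityEvaluator(
    sentences1=["A plane is taking off."],
    sentences2=["An air plane is taking off."],
    scores=[1.0],
    name="sts-dev",
)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=dev_evaluator,
    epochs=1,
    evaluation_steps=880,
    warmup_steps=881,
    scheduler="WarmupLinear",
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
    max_grad_norm=1,
)
```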
config.json
ADDED
@@ -0,0 +1,54 @@
+{
+  "_name_or_path": "EleutherAI/gpt-neo-125M",
+  "activation_function": "gelu_new",
+  "architectures": [
+    "GPTNeoModel"
+  ],
+  "attention_dropout": 0,
+  "attention_layers": [
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local",
+    "global",
+    "local"
+  ],
+  "attention_types": [
+    [
+      [
+        "global",
+        "local"
+      ],
+      6
+    ]
+  ],
+  "bos_token_id": 50256,
+  "embed_dropout": 0,
+  "eos_token_id": 50256,
+  "gradient_checkpointing": false,
+  "hidden_size": 768,
+  "initializer_range": 0.02,
+  "intermediate_size": null,
+  "layer_norm_epsilon": 1e-05,
+  "max_position_embeddings": 2048,
+  "model_type": "gpt_neo",
+  "num_heads": 12,
+  "num_layers": 12,
+  "resid_dropout": 0,
+  "summary_activation": null,
+  "summary_first_dropout": 0.1,
+  "summary_proj_to_labels": true,
+  "summary_type": "cls_index",
+  "summary_use_proj": true,
+  "torch_dtype": "float32",
+  "transformers_version": "4.11.3",
+  "use_cache": true,
+  "vocab_size": 50257,
+  "window_size": 256
+}
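This is the stock `EleutherAI/gpt-neo-125M` configuration (12 layers alternating global and local attention, hidden size 768, so the pooled sentence embeddings are 768-dimensional). If it helps, the backbone can be inspected on its own with plain transformers; the repository identifier below is a placeholder.

```python
from transformers import AutoModel, AutoTokenizer

repo = "path/to/SGPT-125M-learntmean-nli"  # placeholder identifier
model = AutoModel.from_pretrained(repo)       # GPTNeoModel: hidden states only, no LM head
tokenizer = AutoTokenizer.from_pretrained(repo)

inputs = tokenizer("This is an example sentence", return_tensors="pt")
token_embeddings = model(**inputs).last_hidden_state  # (1, seq_len, 768), input to the pooling module
```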
config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
+{
+  "__version__": {
+    "sentence_transformers": "2.1.0",
+    "transformers": "4.11.3",
+    "pytorch": "1.10.1"
+  }
+}
eval/similarity_evaluation_sts-dev_results.csv
ADDED
@@ -0,0 +1,12 @@
+epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+0,880,0.7874759629669823,0.7912446830570768,0.7903261342269166,0.7937779499298665,0.793451912598859,0.7971659469298812,0.6047788112816523,0.6507645985657873
+0,1760,0.8131803078100803,0.8217217521323321,0.8078740093837247,0.8136387056288685,0.80783684036113,0.8145862845746178,0.6442698777398019,0.6769324689376811
+0,2640,0.8205188607650769,0.8295576756894527,0.809619185876313,0.8164881399796352,0.8087770249430669,0.8164148498934699,0.6532800448682675,0.6845039439137265
+0,3520,0.8194023636022392,0.8291942305987777,0.8026114977298271,0.8097468417330873,0.8014032761561445,0.8094529536299162,0.6612855098370835,0.6945688013933999
+0,4400,0.8278999458091857,0.8370930561889419,0.8111320038675869,0.8180086766161624,0.808879055914931,0.8170175211278848,0.6760937280200618,0.7015204574363635
+0,5280,0.8281381038642194,0.8369509138876907,0.8051756334816669,0.8124593400462924,0.8028705531174375,0.8114328802631885,0.6819638474225186,0.708109243122145
+0,6160,0.8282860490892665,0.8380505641127632,0.8056595930718852,0.8127151391552065,0.8030974392778729,0.8112899442214865,0.6864753267136721,0.7133431150614978
+0,7040,0.8299474027656404,0.8385512601878364,0.8076509282307714,0.8144305140044854,0.8052765414613676,0.8130008421993484,0.6843023158951336,0.7116509270609606
+0,7920,0.83068475282831,0.83940890490782,0.8068064968525007,0.8135340881189486,0.8038602213322118,0.8116928430760436,0.6914461462348803,0.7144560233028373
+0,8800,0.8299367319536496,0.8391378193114882,0.8059099147634619,0.813011324412988,0.8027336665562144,0.8109457269020878,0.6892360704449959,0.7127091822396724
+0,-1,0.8299413185956964,0.8391453371102706,0.8058840017990108,0.8130046091058626,0.8027042937543123,0.8109357678204817,0.6892798410433907,0.7127213797569869
merges.txt
ADDED
The diff for this file is too large to render.
See raw diff
modules.json
ADDED
@@ -0,0 +1,14 @@
+[
+  {
+    "idx": 0,
+    "name": "0",
+    "path": "",
+    "type": "sentence_transformers.models.Transformer"
+  },
+  {
+    "idx": 1,
+    "name": "1",
+    "path": "1_WeightedMeanPooling",
+    "type": "sentence_transformers.models.WeightedMeanPooling"
+  }
+]
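`modules.json` is what wires the two pieces together: module 0 is the GPT-Neo transformer stored at the repository root, module 1 is the pooling head stored in `1_WeightedMeanPooling/`. A minimal loading sketch, assuming a sentence-transformers build that provides the `WeightedMeanPooling` class referenced above (it comes from the SGPT code base rather than the stock library), with a placeholder identifier:

```python
from sentence_transformers import SentenceTransformer

# Loading the repository reads modules.json and rebuilds the pipeline:
#   (0) sentence_transformers.models.Transformer          -> GPT-Neo backbone at the repo root
#   (1) sentence_transformers.models.WeightedMeanPooling  -> weights in 1_WeightedMeanPooling/
model = SentenceTransformer("path/to/SGPT-125M-learntmean-nli")  # placeholder identifier

embeddings = model.encode(["This is an example sentence"])
print(embeddings.shape)  # expected: (1, 768)
```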
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:261c582ec054f7c7a021c7d427ac0a33418cf73e27ec9f11456a8c649b3955f2
+size 551190545
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
+{
+  "max_seq_length": 75,
+  "do_lower_case": false
+}
similarity_evaluation_sts-test_results.csv
ADDED
@@ -0,0 +1,2 @@
+epoch,steps,cosine_pearson,cosine_spearman,euclidean_pearson,euclidean_spearman,manhattan_pearson,manhattan_spearman,dot_pearson,dot_spearman
+-1,-1,0.791277841042531,0.8008451549718872,0.7713262200294072,0.7736824259656975,0.7696089299758434,0.7737870864645463,0.58573209645255,0.5838254039153596
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
+{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
tokenizer.json
ADDED
The diff for this file is too large to render.
See raw diff
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
+{"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "model_max_length": 2048, "special_tokens_map_file": null, "name_or_path": "EleutherAI/gpt-neo-125M", "tokenizer_class": "GPT2Tokenizer"}
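A note on the two small tokenizer files above: GPT-2/GPT-Neo tokenizers ship without a dedicated padding token, so `special_tokens_map.json` reuses `<|endoftext|>` for bos, eos, unk, and pad, while `tokenizer_config.json` keeps the original `GPT2Tokenizer` settings from `EleutherAI/gpt-neo-125M`. A quick sanity check with plain transformers (placeholder identifier):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("path/to/SGPT-125M-learntmean-nli")  # placeholder identifier
print(tokenizer.pad_token == tokenizer.eos_token == "<|endoftext|>")  # True

# Padding therefore works for batched encoding, using the eos id as the pad id.
batch = tokenizer(["This is an example sentence", "Each sentence is converted"],
                  padding=True, truncation=True, max_length=75, return_tensors="pt")
print(batch["input_ids"].shape)
```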
vocab.json
ADDED
The diff for this file is too large to render.
See raw diff