Commit c149683 by Muennighoff
Parent(s): f14b829
Add SGPT-6.1B-weightedmean-msmarco-specb-bitfit
Browse files
- 1_Pooling/config.json +9 -0
- README.md +89 -0
- added_tokens.json +1 -0
- config.json +42 -0
- config_sentence_transformers.json +7 -0
- merges.txt +0 -0
- modules.json +14 -0
- pytorch_model.bin +3 -0
- sentence_bert_config.json +4 -0
- special_tokens_map.json +1 -0
- tokenizer.json +0 -0
- tokenizer_config.json +1 -0
- vocab.json +0 -0
1_Pooling/config.json
ADDED
@@ -0,0 +1,9 @@
{
  "word_embedding_dimension": 4096,
  "pooling_mode_cls_token": false,
  "pooling_mode_mean_tokens": false,
  "pooling_mode_max_tokens": false,
  "pooling_mode_mean_sqrt_len_tokens": false,
  "pooling_mode_weightedmean_tokens": true,
  "pooling_mode_lasttoken": false
}
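Only `pooling_mode_weightedmean_tokens` is enabled above. In SGPT-style weighted-mean pooling, each token embedding is weighted by its 1-based position before averaging, so later tokens contribute more to the 4096-dimensional sentence vector. A minimal sketch of that operation, not the library's exact implementation:

```python
import torch

def weighted_mean_pooling(token_embeddings: torch.Tensor, attention_mask: torch.Tensor) -> torch.Tensor:
    """Position-weighted mean pooling.

    token_embeddings: [batch, seq_len, dim], attention_mask: [batch, seq_len]
    """
    # Weight 1..seq_len per position; zero out padding via the attention mask.
    positions = torch.arange(1, token_embeddings.size(1) + 1, device=token_embeddings.device)
    weights = (positions.unsqueeze(0) * attention_mask).unsqueeze(-1).float()  # [batch, seq_len, 1]
    return (token_embeddings * weights).sum(dim=1) / weights.sum(dim=1)

# Tiny usage example with random data.
emb = weighted_mean_pooling(torch.randn(2, 5, 4096), torch.ones(2, 5))
print(emb.shape)  # torch.Size([2, 4096])
```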
README.md
ADDED
@@ -0,0 +1,89 @@
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# {MODEL_NAME}

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 4096-dimensional dense vector space and can be used for tasks like clustering or semantic search.

<!--- Describe your model here -->

## Usage (Sentence-Transformers)

Using this model is easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```

## Evaluation Results

<!--- Describe how your model was evaluated -->

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 249592 with parameters:
```
{'batch_size': 2, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit() method:
```
{
    "epochs": 10,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "lr": 5e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture
```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 300, 'do_lower_case': False}) with Transformer model: GPTJModel
  (1): Pooling({'word_embedding_dimension': 4096, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': True, 'pooling_mode_lasttoken': False})
)
```

## Citing & Authors

<!--- Describe where people can find more information -->
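The training section above lists the DataLoader, loss, and `fit()` parameters as raw dictionaries. The sketch below is a hedged reconstruction of what that setup could look like in sentence-transformers code: the example pairs are placeholders rather than the MS MARCO data actually used, and the BitFit step (freezing everything except bias terms) is omitted.

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses, models

# GPT-J backbone plus weighted-mean pooling, mirroring modules.json and 1_Pooling/config.json.
word_emb = models.Transformer("EleutherAI/gpt-j-6B", max_seq_length=300)
word_emb.tokenizer.pad_token = word_emb.tokenizer.eos_token  # GPT-J has no pad token by default
pooling = models.Pooling(
    word_emb.get_word_embedding_dimension(),
    pooling_mode_mean_tokens=False,
    pooling_mode_weightedmean_tokens=True,
)
model = SentenceTransformer(modules=[word_emb, pooling])

# Placeholder (query, positive passage) pairs; the real run drew 249592 batches of size 2 from MS MARCO.
train_examples = [
    InputExample(texts=["what is a sentence embedding", "A sentence embedding maps text to a vector."]),
    InputExample(texts=["capital of france", "Paris is the capital of France."]),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)

# In-batch-negatives loss with the parameters listed in the README.
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    scheduler="WarmupLinear",
    warmup_steps=1000,
    optimizer_params={"lr": 5e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```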
added_tokens.json
ADDED
@@ -0,0 +1 @@
{"<|extratoken_20|>": 50276, "<|extratoken_109|>": 50365, "<|extratoken_133|>": 50389, "<|extratoken_135|>": 50391, "<|extratoken_2|>": 50258, "<|extratoken_90|>": 50346, "<|extratoken_52|>": 50308, "<|extratoken_96|>": 50352, "<|extratoken_60|>": 50316, "<|extratoken_54|>": 50310, "<|extratoken_122|>": 50378, "<|extratoken_56|>": 50312, "<|extratoken_97|>": 50353, "<|extratoken_112|>": 50368, "<|extratoken_64|>": 50320, "<|extratoken_75|>": 50331, "<|extratoken_108|>": 50364, "<|extratoken_107|>": 50363, "<|extratoken_13|>": 50269, "<|extratoken_116|>": 50372, "<|extratoken_78|>": 50334, "<|extratoken_39|>": 50295, "<|extratoken_22|>": 50278, "<|extratoken_124|>": 50380, "<|extratoken_66|>": 50322, "<|extratoken_114|>": 50370, "<|extratoken_42|>": 50298, "<|extratoken_79|>": 50335, "<|extratoken_127|>": 50383, "<|extratoken_69|>": 50325, "<|extratoken_3|>": 50259, "<|extratoken_83|>": 50339, "<|extratoken_24|>": 50280, "<|extratoken_120|>": 50376, "<|extratoken_53|>": 50309, "<|extratoken_55|>": 50311, "<|extratoken_19|>": 50275, "<|extratoken_93|>": 50349, "<|extratoken_88|>": 50344, "<|extratoken_131|>": 50387, "<|extratoken_33|>": 50289, "<|extratoken_65|>": 50321, "<|extratoken_59|>": 50315, "<|extratoken_123|>": 50379, "<|extratoken_125|>": 50381, "<|extratoken_46|>": 50302, "<|extratoken_82|>": 50338, "<|extratoken_139|>": 50395, "<|extratoken_26|>": 50282, "<|extratoken_49|>": 50305, "<|extratoken_12|>": 50268, "<|extratoken_38|>": 50294, "<|extratoken_36|>": 50292, "<|extratoken_103|>": 50359, "<|extratoken_86|>": 50342, "<|extratoken_18|>": 50274, "<|extratoken_95|>": 50351, "<|extratoken_21|>": 50277, "<|extratoken_23|>": 50279, "<|extratoken_141|>": 50397, "<|extratoken_143|>": 50399, "<|extratoken_99|>": 50355, "<|extratoken_132|>": 50388, "<|extratoken_84|>": 50340, "<|extratoken_32|>": 50288, "<|extratoken_134|>": 50390, "<|extratoken_62|>": 50318, "<|extratoken_40|>": 50296, "<|extratoken_91|>": 50347, "<|extratoken_110|>": 50366, "<|extratoken_4|>": 50260, "<|extratoken_81|>": 50337, "<|extratoken_136|>": 50392, "<|extratoken_101|>": 50357, "<|extratoken_29|>": 50285, "<|extratoken_94|>": 50350, "<|extratoken_70|>": 50326, "<|extratoken_16|>": 50272, "<|extratoken_87|>": 50343, "<|extratoken_115|>": 50371, "<|extratoken_77|>": 50333, "<|extratoken_15|>": 50271, "<|extratoken_89|>": 50345, "{SOS}": 50401, "<|extratoken_27|>": 50283, "<|extratoken_8|>": 50264, "<|extratoken_1|>": 50257, "<|extratoken_119|>": 50375, "<|extratoken_98|>": 50354, "<|extratoken_11|>": 50267, "<|extratoken_35|>": 50291, "<|extratoken_17|>": 50273, "<|extratoken_142|>": 50398, "<|extratoken_31|>": 50287, "<|extratoken_68|>": 50324, "<|extratoken_58|>": 50314, "<|extratoken_51|>": 50307, "<|extratoken_67|>": 50323, "<|extratoken_7|>": 50263, "<|extratoken_44|>": 50300, "<|extratoken_5|>": 50261, "<|extratoken_41|>": 50297, "<|extratoken_92|>": 50348, "<|extratoken_106|>": 50362, "<|extratoken_138|>": 50394, "<|extratoken_45|>": 50301, "<|extratoken_74|>": 50330, "<|extratoken_6|>": 50262, "<|extratoken_73|>": 50329, "<|extratoken_100|>": 50356, "<|extratoken_111|>": 50367, "<|extratoken_34|>": 50290, "<|extratoken_50|>": 50306, "<|extratoken_14|>": 50270, "<|extratoken_117|>": 50373, "<|extratoken_63|>": 50319, "<|extratoken_61|>": 50317, "<|extratoken_80|>": 50336, "[SOS]": 50400, "<|extratoken_121|>": 50377, "<|extratoken_118|>": 50374, "<|extratoken_126|>": 50382, "<|extratoken_9|>": 50265, "<|extratoken_10|>": 50266, "<|extratoken_137|>": 50393, "<|extratoken_37|>": 50293, "<|extratoken_102|>": 
50358, "<|extratoken_76|>": 50332, "<|extratoken_85|>": 50341, "<|extratoken_25|>": 50281, "<|extratoken_105|>": 50361, "<|extratoken_130|>": 50386, "<|extratoken_47|>": 50303, "<|extratoken_57|>": 50313, "<|extratoken_71|>": 50327, "<|extratoken_30|>": 50286, "<|extratoken_28|>": 50284, "<|extratoken_43|>": 50299, "<|extratoken_140|>": 50396, "<|extratoken_72|>": 50328, "<|extratoken_104|>": 50360, "<|extratoken_113|>": 50369, "<|extratoken_48|>": 50304, "<|extratoken_129|>": 50385, "<|extratoken_128|>": 50384}
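Most of this map re-registers GPT-J's reserved `<|extratoken_*|>` tokens; the two genuinely new entries are `[SOS]` (id 50400) and `{SOS}` (id 50401). These appear to be the markers behind the "specb" (special brackets) part of the model name, letting the encoder distinguish queries from documents in asymmetric search. A small check of the ids, assuming the hub id Muennighoff/SGPT-6.1B-weightedmean-msmarco-specb-bitfit (inferred from the commit author and message):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Muennighoff/SGPT-6.1B-weightedmean-msmarco-specb-bitfit")
print(tokenizer.convert_tokens_to_ids("[SOS]"))  # 50400, per added_tokens.json
print(tokenizer.convert_tokens_to_ids("{SOS}"))  # 50401
```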
config.json
ADDED
@@ -0,0 +1,42 @@
{
  "_name_or_path": "EleutherAI/gpt-j-6B",
  "activation_function": "gelu_new",
  "architectures": [
    "GPTJModel"
  ],
  "attn_pdrop": 0.0,
  "bos_token_id": 50256,
  "embd_pdrop": 0.0,
  "eos_token_id": 50256,
  "gradient_checkpointing": false,
  "initializer_range": 0.02,
  "layer_norm_epsilon": 1e-05,
  "model_type": "gptj",
  "n_ctx": 2048,
  "n_embd": 4096,
  "n_head": 16,
  "n_inner": null,
  "n_layer": 28,
  "n_positions": 2048,
  "resid_pdrop": 0.0,
  "rotary": true,
  "rotary_dim": 64,
  "scale_attn_weights": true,
  "summary_activation": null,
  "summary_first_dropout": 0.1,
  "summary_proj_to_labels": true,
  "summary_type": "cls_index",
  "summary_use_proj": true,
  "task_specific_params": {
    "text-generation": {
      "do_sample": true,
      "max_length": 50,
      "temperature": 1.0
    }
  },
  "tokenizer_class": "GPT2Tokenizer",
  "torch_dtype": "float32",
  "transformers_version": "4.11.3",
  "use_cache": true,
  "vocab_size": 50402
}
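One detail in this backbone config: `vocab_size` is 50402, two more than stock GPT-J (50400), because the `[SOS]` and `{SOS}` markers from added_tokens.json extend the embedding table. A quick check, assuming the hub ids below (the SGPT id is inferred from the commit author and message):

```python
from transformers import AutoConfig

base = AutoConfig.from_pretrained("EleutherAI/gpt-j-6B")
sgpt = AutoConfig.from_pretrained("Muennighoff/SGPT-6.1B-weightedmean-msmarco-specb-bitfit")
print(base.vocab_size, sgpt.vocab_size)   # 50400 50402
print(sgpt.vocab_size - base.vocab_size)  # 2 extra rows for the [SOS] / {SOS} markers
```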
config_sentence_transformers.json
ADDED
@@ -0,0 +1,7 @@
{
  "__version__": {
    "sentence_transformers": "2.1.0",
    "transformers": "4.11.3",
    "pytorch": "1.10.1"
  }
}
merges.txt
ADDED
The diff for this file is too large to render.
modules.json
ADDED
@@ -0,0 +1,14 @@
[
  {
    "idx": 0,
    "name": "0",
    "path": "",
    "type": "sentence_transformers.models.Transformer"
  },
  {
    "idx": 1,
    "name": "1",
    "path": "1_Pooling",
    "type": "sentence_transformers.models.Pooling"
  }
]
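modules.json is what makes sentence-transformers chain the two modules at load time: the GPT-J `Transformer` stored at the repository root and the weighted-mean `Pooling` stored under `1_Pooling/`. A quick way to see that chain, assuming a local clone of the repository (the path is illustrative):

```python
from sentence_transformers import SentenceTransformer

# Illustrative local path to a clone of this repository.
model = SentenceTransformer("./SGPT-6.1B-weightedmean-msmarco-specb-bitfit")

print(model)                 # (0): Transformer with GPTJModel -> (1): Pooling, as declared in modules.json
print(model.max_seq_length)  # 300, taken from sentence_bert_config.json
```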
pytorch_model.bin
ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:43b76ced3d70a603a6f7d57f1fb5b6a7944554bf4a4efb6372d987321db338c0
size 23495172015
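The pointer records a roughly 23.5 GB weight file, which is consistent with `"torch_dtype": "float32"` in config.json: about 5.9 billion parameters at 4 bytes each, i.e. roughly the GPT-J backbone stored as GPTJModel without a language-modeling head. A back-of-the-envelope check:

```python
size_bytes = 23_495_172_015   # from the LFS pointer above
bytes_per_param = 4           # float32, per config.json
print(f"~{size_bytes / bytes_per_param / 1e9:.2f}B parameters")  # ~5.87B
```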
sentence_bert_config.json
ADDED
@@ -0,0 +1,4 @@
{
  "max_seq_length": 300,
  "do_lower_case": false
}
special_tokens_map.json
ADDED
@@ -0,0 +1 @@
{"bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true}, "pad_token": "<|endoftext|>"}
tokenizer.json
ADDED
The diff for this file is too large to render.
tokenizer_config.json
ADDED
@@ -0,0 +1 @@
{"unk_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "bos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "eos_token": {"content": "<|endoftext|>", "single_word": false, "lstrip": false, "rstrip": false, "normalized": true, "__type": "AddedToken"}, "add_prefix_space": false, "errors": "replace", "model_max_length": 2048, "special_tokens_map_file": null, "name_or_path": "EleutherAI/gpt-j-6B", "tokenizer_class": "GPT2Tokenizer"}
vocab.json
ADDED
The diff for this file is too large to render.