LoupGarou committed on
Commit
263fa23
1 Parent(s): f723dbc

Upload 10 files

README.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ language:
+ - en
+ datasets:
+ - timdettmers/openassistant-guanaco
+ library_name: transformers
+ license: apache-2.0
+ ---
+
+ ## Starcoderplus-Guanaco-GPT4-15B-V1.0 Model Card
+ Starcoderplus-Guanaco-GPT4-15B-V1.0 is a language model that combines the strengths of the [Starcoderplus](https://huggingface.co/bigcode/starcoderplus) base model, an expansion of the original [openassistant-guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) dataset re-imagined using 100% GPT-4 answers, and additional data on abstract algebra and physics for finetuning. The original openassistant-guanaco questions were trimmed to within 2 standard deviations of token length for input and output pairs, and all non-English data was removed to reduce training size requirements.
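+
+ The length trimming described above can be reproduced along these lines. This is only a minimal sketch under assumed defaults; the actual preprocessing script, tokenizer settings, and language filter used for this model are not published in this card:
+
+ ```python
+ # Illustrative preprocessing sketch (not the original script): keep only examples
+ # whose token length falls within 2 standard deviations of the mean.
+ # Non-English removal is omitted here.
+ from datasets import load_dataset
+ from transformers import AutoTokenizer
+
+ tokenizer = AutoTokenizer.from_pretrained("bigcode/starcoderplus")
+ dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
+
+ # Token length of each input/output pair.
+ lengths = [len(tokenizer(row["text"]).input_ids) for row in dataset]
+ mean = sum(lengths) / len(lengths)
+ std = (sum((l - mean) ** 2 for l in lengths) / len(lengths)) ** 0.5
+
+ keep = [i for i, l in enumerate(lengths) if abs(l - mean) <= 2 * std]
+ filtered = dataset.select(keep)
+ print(f"Kept {len(filtered)} of {len(dataset)} examples")
+ ```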
+
+ # Model Description
+ This model is built on top of the Starcoderplus base model, a large language model that is itself a fine-tuned version of [StarCoderBase](https://huggingface.co/bigcode/starcoderbase). The Starcoderplus base model was further finetuned using QLoRA on the revised openassistant-guanaco dataset, re-imagined with 100% GPT-4 answers.
+
+ # Intended Use
+ This model is designed to be used for a wide array of text generation tasks that require understanding and generating English text. The model is expected to perform well in tasks such as answering questions, writing essays, summarizing text, translation, and more. However, given the specific data processing and finetuning done, it might be particularly effective for tasks related to English language question-answering systems.
+
+ # Limitations
+ Despite the powerful capabilities of this model, users should be aware of its limitations. The model's knowledge is up to date only until the time it was trained, and it doesn't know about events in the world after that. It can sometimes produce incorrect or nonsensical responses, as it doesn't understand the text in the same way humans do. It should be used as a tool to assist in generating text and not as a sole source of truth.
+
+ # How to use
+ Here is an example of how to use this model:
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+ import time
+ import torch
+
+ class Chatbot:
+     def __init__(self, model_name):
+         # Left padding so generated tokens follow the prompt directly.
+         self.tokenizer = AutoTokenizer.from_pretrained(model_name, padding_side='left')
+         # 4-bit loading requires the bitsandbytes and accelerate packages.
+         self.model = AutoModelForCausalLM.from_pretrained(model_name, load_in_4bit=True,
+                                                           torch_dtype=torch.bfloat16, device_map="auto")
+         if self.tokenizer.pad_token_id is None:
+             self.tokenizer.pad_token_id = self.tokenizer.eos_token_id
+
+     def get_response(self, prompt):
+         # Prompts are left-padded to a fixed length of 100 tokens.
+         inputs = self.tokenizer.encode_plus(prompt, return_tensors="pt", padding='max_length', max_length=100)
+         if next(self.model.parameters()).is_cuda:
+             inputs = {name: tensor.to('cuda') for name, tensor in inputs.items()}
+         start_time = time.time()
+         tokens = self.model.generate(input_ids=inputs['input_ids'],
+                                      attention_mask=inputs['attention_mask'],
+                                      pad_token_id=self.tokenizer.pad_token_id,
+                                      max_new_tokens=400)
+         end_time = time.time()
+         # Drop the prompt tokens so only the newly generated text is returned.
+         output_tokens = tokens[0][inputs['input_ids'].shape[-1]:]
+         output = self.tokenizer.decode(output_tokens, skip_special_tokens=True)
+         time_taken = end_time - start_time
+         return output, time_taken
+
+ def main():
+     chatbot = Chatbot("LoupGarou/Starcoderplus-Guanaco-GPT4-15B-V1.0")
+     while True:
+         user_input = input("Enter your prompt: ")
+         if user_input.lower() == 'quit':
+             break
+         output, time_taken = chatbot.get_response(user_input)
+         print("\033[33m" + output + "\033[0m")
+         print("Time taken to process: ", time_taken, "seconds")
+     print("Exited the program.")
+
+ if __name__ == "__main__":
+     main()
+
+ ```
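+
+ Note that the 4-bit loading path above assumes a CUDA GPU with the `bitsandbytes` and `accelerate` packages installed alongside `transformers`; on a CPU-only machine, drop `load_in_4bit=True` and load the model in full or half precision instead.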
+
+ # Training Procedure
+ The base Starcoderplus model was finetuned with QLoRA on the modified openassistant-guanaco dataset, 100% re-imagined with GPT-4 answers. All non-English data was also removed from this finetuning dataset to reduce training size and time.
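+
+ The exact training configuration is not published in this card. The snippet below is only a rough sketch of how a comparable QLoRA run could be set up with `peft` and `bitsandbytes`; the hyperparameters, LoRA target modules, and dataset split are illustrative assumptions, not the values used for this model:
+
+ ```python
+ # Hypothetical QLoRA finetuning sketch; not the actual training script for this model.
+ import torch
+ from datasets import load_dataset
+ from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training
+ from transformers import (AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig,
+                           DataCollatorForLanguageModeling, Trainer, TrainingArguments)
+
+ base = "bigcode/starcoderplus"
+ tokenizer = AutoTokenizer.from_pretrained(base)
+ tokenizer.pad_token = tokenizer.eos_token
+
+ # Load the base model in 4-bit and prepare it for k-bit (QLoRA-style) training.
+ model = AutoModelForCausalLM.from_pretrained(
+     base,
+     quantization_config=BitsAndBytesConfig(load_in_4bit=True,
+                                            bnb_4bit_quant_type="nf4",
+                                            bnb_4bit_compute_dtype=torch.bfloat16),
+     device_map="auto",
+ )
+ model = prepare_model_for_kbit_training(model)
+
+ # LoRA adapters on the GPTBigCode attention/MLP projection modules (assumed targets).
+ lora = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
+                   target_modules=["c_attn", "c_proj"], task_type="CAUSAL_LM")
+ model = get_peft_model(model, lora)
+
+ dataset = load_dataset("timdettmers/openassistant-guanaco", split="train")
+ tokenized = dataset.map(lambda row: tokenizer(row["text"], truncation=True, max_length=1024),
+                         remove_columns=dataset.column_names)
+
+ trainer = Trainer(
+     model=model,
+     args=TrainingArguments(output_dir="qlora-out", per_device_train_batch_size=1,
+                            gradient_accumulation_steps=8, num_train_epochs=1,
+                            learning_rate=2e-4, bf16=True, logging_steps=10),
+     train_dataset=tokenized,
+     data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
+ )
+ trainer.train()
+ ```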
+
+ ## Acknowledgements
+ This model, Starcoderplus-Guanaco-GPT4-15B-V1.0, builds upon the strengths of the [Starcoderplus](https://huggingface.co/bigcode/starcoderplus) base model and the [openassistant-guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco).
+
+ A sincere appreciation goes out to the developers and the community involved in the creation and refinement of these models. Their commitment to providing open-source tools and datasets has been instrumental in making this project a reality.
+
+ Moreover, a special note of thanks to the [Hugging Face](https://huggingface.co/) team, whose transformative library has not only streamlined the process of model creation and adaptation, but also democratized access to state-of-the-art machine learning technologies. Their impact on the development of this project cannot be overstated.
config.json ADDED
@@ -0,0 +1,39 @@
+ {
+   "_name_or_path": "/fsx/bigcode/experiments/pretraining/conversions/starcoderplus/large-model",
+   "activation_function": "gelu",
+   "architectures": [
+     "GPTBigCodeForCausalLM"
+   ],
+   "attention_softmax_in_fp32": true,
+   "multi_query": true,
+   "attn_pdrop": 0.1,
+   "bos_token_id": 0,
+   "embd_pdrop": 0.1,
+   "eos_token_id": 0,
+   "inference_runner": 0,
+   "initializer_range": 0.02,
+   "layer_norm_epsilon": 1e-05,
+   "max_batch_size": null,
+   "max_sequence_length": null,
+   "model_type": "gpt_bigcode",
+   "n_embd": 6144,
+   "n_head": 48,
+   "n_inner": 24576,
+   "n_layer": 40,
+   "n_positions": 8192,
+   "pad_key_length": true,
+   "pre_allocate_kv_cache": false,
+   "resid_pdrop": 0.1,
+   "scale_attention_softmax_in_fp32": true,
+   "scale_attn_weights": true,
+   "summary_activation": null,
+   "summary_first_dropout": 0.1,
+   "summary_proj_to_labels": true,
+   "summary_type": "cls_index",
+   "summary_use_proj": true,
+   "torch_dtype": "float32",
+   "transformers_version": "4.28.1",
+   "use_cache": true,
+   "validate_runner_input": true,
+   "vocab_size": 49152
+ }
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "transformers_version": "4.27.0.dev0"
+ }
gitattributes.txt ADDED
@@ -0,0 +1,34 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
pytorch_model.bin.index.json ADDED
@@ -0,0 +1,492 @@
1
+ {
2
+ "metadata": {
3
+ "total_size": 63277785088
4
+ },
5
+ "weight_map": {
6
+ "lm_head.weight": "pytorch_model.bin",
7
+ "transformer.h.0.attn.c_attn.bias": "pytorch_model.bin",
8
+ "transformer.h.0.attn.c_attn.weight": "pytorch_model.bin",
9
+ "transformer.h.0.attn.c_proj.bias": "pytorch_model.bin",
10
+ "transformer.h.0.attn.c_proj.weight": "pytorch_model.bin",
11
+ "transformer.h.0.ln_1.bias": "pytorch_model.bin",
12
+ "transformer.h.0.ln_1.weight": "pytorch_model.bin",
13
+ "transformer.h.0.ln_2.bias": "pytorch_model.bin",
14
+ "transformer.h.0.ln_2.weight": "pytorch_model.bin",
15
+ "transformer.h.0.mlp.c_fc.bias": "pytorch_model.bin",
16
+ "transformer.h.0.mlp.c_fc.weight": "pytorch_model.bin",
17
+ "transformer.h.0.mlp.c_proj.bias": "pytorch_model.bin",
18
+ "transformer.h.0.mlp.c_proj.weight": "pytorch_model.bin",
19
+ "transformer.h.1.attn.c_attn.bias": "pytorch_model.bin",
20
+ "transformer.h.1.attn.c_attn.weight": "pytorch_model.bin",
21
+ "transformer.h.1.attn.c_proj.bias": "pytorch_model.bin",
22
+ "transformer.h.1.attn.c_proj.weight": "pytorch_model.bin",
23
+ "transformer.h.1.ln_1.bias": "pytorch_model.bin",
24
+ "transformer.h.1.ln_1.weight": "pytorch_model.bin",
25
+ "transformer.h.1.ln_2.bias": "pytorch_model.bin",
26
+ "transformer.h.1.ln_2.weight": "pytorch_model.bin",
27
+ "transformer.h.1.mlp.c_fc.bias": "pytorch_model.bin",
28
+ "transformer.h.1.mlp.c_fc.weight": "pytorch_model.bin",
29
+ "transformer.h.1.mlp.c_proj.bias": "pytorch_model.bin",
30
+ "transformer.h.1.mlp.c_proj.weight": "pytorch_model.bin",
31
+ "transformer.h.10.attn.c_attn.bias": "pytorch_model.bin",
32
+ "transformer.h.10.attn.c_attn.weight": "pytorch_model.bin",
33
+ "transformer.h.10.attn.c_proj.bias": "pytorch_model.bin",
34
+ "transformer.h.10.attn.c_proj.weight": "pytorch_model.bin",
35
+ "transformer.h.10.ln_1.bias": "pytorch_model.bin",
36
+ "transformer.h.10.ln_1.weight": "pytorch_model.bin",
37
+ "transformer.h.10.ln_2.bias": "pytorch_model.bin",
38
+ "transformer.h.10.ln_2.weight": "pytorch_model.bin",
39
+ "transformer.h.10.mlp.c_fc.bias": "pytorch_model.bin",
40
+ "transformer.h.10.mlp.c_fc.weight": "pytorch_model.bin",
41
+ "transformer.h.10.mlp.c_proj.bias": "pytorch_model.bin",
42
+ "transformer.h.10.mlp.c_proj.weight": "pytorch_model.bin",
43
+ "transformer.h.11.attn.c_attn.bias": "pytorch_model.bin",
44
+ "transformer.h.11.attn.c_attn.weight": "pytorch_model.bin",
45
+ "transformer.h.11.attn.c_proj.bias": "pytorch_model.bin",
46
+ "transformer.h.11.attn.c_proj.weight": "pytorch_model.bin",
47
+ "transformer.h.11.ln_1.bias": "pytorch_model.bin",
48
+ "transformer.h.11.ln_1.weight": "pytorch_model.bin",
49
+ "transformer.h.11.ln_2.bias": "pytorch_model.bin",
50
+ "transformer.h.11.ln_2.weight": "pytorch_model.bin",
51
+ "transformer.h.11.mlp.c_fc.bias": "pytorch_model.bin",
52
+ "transformer.h.11.mlp.c_fc.weight": "pytorch_model.bin",
53
+ "transformer.h.11.mlp.c_proj.bias": "pytorch_model.bin",
54
+ "transformer.h.11.mlp.c_proj.weight": "pytorch_model.bin",
55
+ "transformer.h.12.attn.c_attn.bias": "pytorch_model.bin",
56
+ "transformer.h.12.attn.c_attn.weight": "pytorch_model.bin",
57
+ "transformer.h.12.attn.c_proj.bias": "pytorch_model.bin",
58
+ "transformer.h.12.attn.c_proj.weight": "pytorch_model.bin",
59
+ "transformer.h.12.ln_1.bias": "pytorch_model.bin",
60
+ "transformer.h.12.ln_1.weight": "pytorch_model.bin",
61
+ "transformer.h.12.ln_2.bias": "pytorch_model.bin",
62
+ "transformer.h.12.ln_2.weight": "pytorch_model.bin",
63
+ "transformer.h.12.mlp.c_fc.bias": "pytorch_model.bin",
64
+ "transformer.h.12.mlp.c_fc.weight": "pytorch_model.bin",
65
+ "transformer.h.12.mlp.c_proj.bias": "pytorch_model.bin",
66
+ "transformer.h.12.mlp.c_proj.weight": "pytorch_model.bin",
67
+ "transformer.h.13.attn.c_attn.bias": "pytorch_model.bin",
68
+ "transformer.h.13.attn.c_attn.weight": "pytorch_model.bin",
69
+ "transformer.h.13.attn.c_proj.bias": "pytorch_model.bin",
70
+ "transformer.h.13.attn.c_proj.weight": "pytorch_model.bin",
71
+ "transformer.h.13.ln_1.bias": "pytorch_model.bin",
72
+ "transformer.h.13.ln_1.weight": "pytorch_model.bin",
73
+ "transformer.h.13.ln_2.bias": "pytorch_model.bin",
74
+ "transformer.h.13.ln_2.weight": "pytorch_model.bin",
75
+ "transformer.h.13.mlp.c_fc.bias": "pytorch_model.bin",
76
+ "transformer.h.13.mlp.c_fc.weight": "pytorch_model.bin",
77
+ "transformer.h.13.mlp.c_proj.bias": "pytorch_model.bin",
78
+ "transformer.h.13.mlp.c_proj.weight": "pytorch_model.bin",
79
+ "transformer.h.14.attn.c_attn.bias": "pytorch_model.bin",
80
+ "transformer.h.14.attn.c_attn.weight": "pytorch_model.bin",
81
+ "transformer.h.14.attn.c_proj.bias": "pytorch_model.bin",
82
+ "transformer.h.14.attn.c_proj.weight": "pytorch_model.bin",
83
+ "transformer.h.14.ln_1.bias": "pytorch_model.bin",
84
+ "transformer.h.14.ln_1.weight": "pytorch_model.bin",
85
+ "transformer.h.14.ln_2.bias": "pytorch_model.bin",
86
+ "transformer.h.14.ln_2.weight": "pytorch_model.bin",
87
+ "transformer.h.14.mlp.c_fc.bias": "pytorch_model.bin",
88
+ "transformer.h.14.mlp.c_fc.weight": "pytorch_model.bin",
89
+ "transformer.h.14.mlp.c_proj.bias": "pytorch_model.bin",
90
+ "transformer.h.14.mlp.c_proj.weight": "pytorch_model.bin",
91
+ "transformer.h.15.attn.c_attn.bias": "pytorch_model.bin",
92
+ "transformer.h.15.attn.c_attn.weight": "pytorch_model.bin",
93
+ "transformer.h.15.attn.c_proj.bias": "pytorch_model.bin",
94
+ "transformer.h.15.attn.c_proj.weight": "pytorch_model.bin",
95
+ "transformer.h.15.ln_1.bias": "pytorch_model.bin",
96
+ "transformer.h.15.ln_1.weight": "pytorch_model.bin",
97
+ "transformer.h.15.ln_2.bias": "pytorch_model.bin",
98
+ "transformer.h.15.ln_2.weight": "pytorch_model.bin",
99
+ "transformer.h.15.mlp.c_fc.bias": "pytorch_model.bin",
100
+ "transformer.h.15.mlp.c_fc.weight": "pytorch_model.bin",
101
+ "transformer.h.15.mlp.c_proj.bias": "pytorch_model.bin",
102
+ "transformer.h.15.mlp.c_proj.weight": "pytorch_model.bin",
103
+ "transformer.h.16.attn.c_attn.bias": "pytorch_model.bin",
104
+ "transformer.h.16.attn.c_attn.weight": "pytorch_model.bin",
105
+ "transformer.h.16.attn.c_proj.bias": "pytorch_model.bin",
106
+ "transformer.h.16.attn.c_proj.weight": "pytorch_model.bin",
107
+ "transformer.h.16.ln_1.bias": "pytorch_model.bin",
108
+ "transformer.h.16.ln_1.weight": "pytorch_model.bin",
109
+ "transformer.h.16.ln_2.bias": "pytorch_model.bin",
110
+ "transformer.h.16.ln_2.weight": "pytorch_model.bin",
111
+ "transformer.h.16.mlp.c_fc.bias": "pytorch_model.bin",
112
+ "transformer.h.16.mlp.c_fc.weight": "pytorch_model.bin",
113
+ "transformer.h.16.mlp.c_proj.bias": "pytorch_model.bin",
114
+ "transformer.h.16.mlp.c_proj.weight": "pytorch_model.bin",
115
+ "transformer.h.17.attn.c_attn.bias": "pytorch_model.bin",
116
+ "transformer.h.17.attn.c_attn.weight": "pytorch_model.bin",
117
+ "transformer.h.17.attn.c_proj.bias": "pytorch_model.bin",
118
+ "transformer.h.17.attn.c_proj.weight": "pytorch_model.bin",
119
+ "transformer.h.17.ln_1.bias": "pytorch_model.bin",
120
+ "transformer.h.17.ln_1.weight": "pytorch_model.bin",
121
+ "transformer.h.17.ln_2.bias": "pytorch_model.bin",
122
+ "transformer.h.17.ln_2.weight": "pytorch_model.bin",
123
+ "transformer.h.17.mlp.c_fc.bias": "pytorch_model.bin",
124
+ "transformer.h.17.mlp.c_fc.weight": "pytorch_model.bin",
125
+ "transformer.h.17.mlp.c_proj.bias": "pytorch_model.bin",
126
+ "transformer.h.17.mlp.c_proj.weight": "pytorch_model.bin",
127
+ "transformer.h.18.attn.c_attn.bias": "pytorch_model.bin",
128
+ "transformer.h.18.attn.c_attn.weight": "pytorch_model.bin",
129
+ "transformer.h.18.attn.c_proj.bias": "pytorch_model.bin",
130
+ "transformer.h.18.attn.c_proj.weight": "pytorch_model.bin",
131
+ "transformer.h.18.ln_1.bias": "pytorch_model.bin",
132
+ "transformer.h.18.ln_1.weight": "pytorch_model.bin",
133
+ "transformer.h.18.ln_2.bias": "pytorch_model.bin",
134
+ "transformer.h.18.ln_2.weight": "pytorch_model.bin",
135
+ "transformer.h.18.mlp.c_fc.bias": "pytorch_model.bin",
136
+ "transformer.h.18.mlp.c_fc.weight": "pytorch_model.bin",
137
+ "transformer.h.18.mlp.c_proj.bias": "pytorch_model.bin",
138
+ "transformer.h.18.mlp.c_proj.weight": "pytorch_model.bin",
139
+ "transformer.h.19.attn.c_attn.bias": "pytorch_model.bin",
140
+ "transformer.h.19.attn.c_attn.weight": "pytorch_model.bin",
141
+ "transformer.h.19.attn.c_proj.bias": "pytorch_model.bin",
142
+ "transformer.h.19.attn.c_proj.weight": "pytorch_model.bin",
143
+ "transformer.h.19.ln_1.bias": "pytorch_model.bin",
144
+ "transformer.h.19.ln_1.weight": "pytorch_model.bin",
145
+ "transformer.h.19.ln_2.bias": "pytorch_model.bin",
146
+ "transformer.h.19.ln_2.weight": "pytorch_model.bin",
147
+ "transformer.h.19.mlp.c_fc.bias": "pytorch_model.bin",
148
+ "transformer.h.19.mlp.c_fc.weight": "pytorch_model.bin",
149
+ "transformer.h.19.mlp.c_proj.bias": "pytorch_model.bin",
150
+ "transformer.h.19.mlp.c_proj.weight": "pytorch_model.bin",
151
+ "transformer.h.2.attn.c_attn.bias": "pytorch_model.bin",
152
+ "transformer.h.2.attn.c_attn.weight": "pytorch_model.bin",
153
+ "transformer.h.2.attn.c_proj.bias": "pytorch_model.bin",
154
+ "transformer.h.2.attn.c_proj.weight": "pytorch_model.bin",
155
+ "transformer.h.2.ln_1.bias": "pytorch_model.bin",
156
+ "transformer.h.2.ln_1.weight": "pytorch_model.bin",
157
+ "transformer.h.2.ln_2.bias": "pytorch_model.bin",
158
+ "transformer.h.2.ln_2.weight": "pytorch_model.bin",
159
+ "transformer.h.2.mlp.c_fc.bias": "pytorch_model.bin",
160
+ "transformer.h.2.mlp.c_fc.weight": "pytorch_model.bin",
161
+ "transformer.h.2.mlp.c_proj.bias": "pytorch_model.bin",
162
+ "transformer.h.2.mlp.c_proj.weight": "pytorch_model.bin",
163
+ "transformer.h.20.attn.c_attn.bias": "pytorch_model.bin",
164
+ "transformer.h.20.attn.c_attn.weight": "pytorch_model.bin",
165
+ "transformer.h.20.attn.c_proj.bias": "pytorch_model.bin",
166
+ "transformer.h.20.attn.c_proj.weight": "pytorch_model.bin",
167
+ "transformer.h.20.ln_1.bias": "pytorch_model.bin",
168
+ "transformer.h.20.ln_1.weight": "pytorch_model.bin",
169
+ "transformer.h.20.ln_2.bias": "pytorch_model.bin",
170
+ "transformer.h.20.ln_2.weight": "pytorch_model.bin",
171
+ "transformer.h.20.mlp.c_fc.bias": "pytorch_model.bin",
172
+ "transformer.h.20.mlp.c_fc.weight": "pytorch_model.bin",
173
+ "transformer.h.20.mlp.c_proj.bias": "pytorch_model.bin",
174
+ "transformer.h.20.mlp.c_proj.weight": "pytorch_model.bin",
175
+ "transformer.h.21.attn.c_attn.bias": "pytorch_model.bin",
176
+ "transformer.h.21.attn.c_attn.weight": "pytorch_model.bin",
177
+ "transformer.h.21.attn.c_proj.bias": "pytorch_model.bin",
178
+ "transformer.h.21.attn.c_proj.weight": "pytorch_model.bin",
179
+ "transformer.h.21.ln_1.bias": "pytorch_model.bin",
180
+ "transformer.h.21.ln_1.weight": "pytorch_model.bin",
181
+ "transformer.h.21.ln_2.bias": "pytorch_model.bin",
182
+ "transformer.h.21.ln_2.weight": "pytorch_model.bin",
183
+ "transformer.h.21.mlp.c_fc.bias": "pytorch_model.bin",
184
+ "transformer.h.21.mlp.c_fc.weight": "pytorch_model.bin",
185
+ "transformer.h.21.mlp.c_proj.bias": "pytorch_model.bin",
186
+ "transformer.h.21.mlp.c_proj.weight": "pytorch_model.bin",
187
+ "transformer.h.22.attn.c_attn.bias": "pytorch_model.bin",
188
+ "transformer.h.22.attn.c_attn.weight": "pytorch_model.bin",
189
+ "transformer.h.22.attn.c_proj.bias": "pytorch_model.bin",
190
+ "transformer.h.22.attn.c_proj.weight": "pytorch_model.bin",
191
+ "transformer.h.22.ln_1.bias": "pytorch_model.bin",
192
+ "transformer.h.22.ln_1.weight": "pytorch_model.bin",
193
+ "transformer.h.22.ln_2.bias": "pytorch_model.bin",
194
+ "transformer.h.22.ln_2.weight": "pytorch_model.bin",
195
+ "transformer.h.22.mlp.c_fc.bias": "pytorch_model.bin",
196
+ "transformer.h.22.mlp.c_fc.weight": "pytorch_model.bin",
197
+ "transformer.h.22.mlp.c_proj.bias": "pytorch_model.bin",
198
+ "transformer.h.22.mlp.c_proj.weight": "pytorch_model.bin",
199
+ "transformer.h.23.attn.c_attn.bias": "pytorch_model.bin",
200
+ "transformer.h.23.attn.c_attn.weight": "pytorch_model.bin",
201
+ "transformer.h.23.attn.c_proj.bias": "pytorch_model.bin",
202
+ "transformer.h.23.attn.c_proj.weight": "pytorch_model.bin",
203
+ "transformer.h.23.ln_1.bias": "pytorch_model.bin",
204
+ "transformer.h.23.ln_1.weight": "pytorch_model.bin",
205
+ "transformer.h.23.ln_2.bias": "pytorch_model.bin",
206
+ "transformer.h.23.ln_2.weight": "pytorch_model.bin",
207
+ "transformer.h.23.mlp.c_fc.bias": "pytorch_model.bin",
208
+ "transformer.h.23.mlp.c_fc.weight": "pytorch_model.bin",
209
+ "transformer.h.23.mlp.c_proj.bias": "pytorch_model.bin",
210
+ "transformer.h.23.mlp.c_proj.weight": "pytorch_model.bin",
211
+ "transformer.h.24.attn.c_attn.bias": "pytorch_model.bin",
212
+ "transformer.h.24.attn.c_attn.weight": "pytorch_model.bin",
213
+ "transformer.h.24.attn.c_proj.bias": "pytorch_model.bin",
214
+ "transformer.h.24.attn.c_proj.weight": "pytorch_model.bin",
215
+ "transformer.h.24.ln_1.bias": "pytorch_model.bin",
216
+ "transformer.h.24.ln_1.weight": "pytorch_model.bin",
217
+ "transformer.h.24.ln_2.bias": "pytorch_model.bin",
218
+ "transformer.h.24.ln_2.weight": "pytorch_model.bin",
219
+ "transformer.h.24.mlp.c_fc.bias": "pytorch_model.bin",
220
+ "transformer.h.24.mlp.c_fc.weight": "pytorch_model.bin",
221
+ "transformer.h.24.mlp.c_proj.bias": "pytorch_model.bin",
222
+ "transformer.h.24.mlp.c_proj.weight": "pytorch_model.bin",
223
+ "transformer.h.25.attn.c_attn.bias": "pytorch_model.bin",
224
+ "transformer.h.25.attn.c_attn.weight": "pytorch_model.bin",
225
+ "transformer.h.25.attn.c_proj.bias": "pytorch_model.bin",
226
+ "transformer.h.25.attn.c_proj.weight": "pytorch_model.bin",
227
+ "transformer.h.25.ln_1.bias": "pytorch_model.bin",
228
+ "transformer.h.25.ln_1.weight": "pytorch_model.bin",
229
+ "transformer.h.25.ln_2.bias": "pytorch_model.bin",
230
+ "transformer.h.25.ln_2.weight": "pytorch_model.bin",
231
+ "transformer.h.25.mlp.c_fc.bias": "pytorch_model.bin",
232
+ "transformer.h.25.mlp.c_fc.weight": "pytorch_model.bin",
233
+ "transformer.h.25.mlp.c_proj.bias": "pytorch_model.bin",
234
+ "transformer.h.25.mlp.c_proj.weight": "pytorch_model.bin",
235
+ "transformer.h.26.attn.c_attn.bias": "pytorch_model.bin",
236
+ "transformer.h.26.attn.c_attn.weight": "pytorch_model.bin",
237
+ "transformer.h.26.attn.c_proj.bias": "pytorch_model.bin",
238
+ "transformer.h.26.attn.c_proj.weight": "pytorch_model.bin",
239
+ "transformer.h.26.ln_1.bias": "pytorch_model.bin",
240
+ "transformer.h.26.ln_1.weight": "pytorch_model.bin",
241
+ "transformer.h.26.ln_2.bias": "pytorch_model.bin",
242
+ "transformer.h.26.ln_2.weight": "pytorch_model.bin",
243
+ "transformer.h.26.mlp.c_fc.bias": "pytorch_model.bin",
244
+ "transformer.h.26.mlp.c_fc.weight": "pytorch_model.bin",
245
+ "transformer.h.26.mlp.c_proj.bias": "pytorch_model.bin",
246
+ "transformer.h.26.mlp.c_proj.weight": "pytorch_model.bin",
247
+ "transformer.h.27.attn.c_attn.bias": "pytorch_model.bin",
248
+ "transformer.h.27.attn.c_attn.weight": "pytorch_model.bin",
249
+ "transformer.h.27.attn.c_proj.bias": "pytorch_model.bin",
250
+ "transformer.h.27.attn.c_proj.weight": "pytorch_model.bin",
251
+ "transformer.h.27.ln_1.bias": "pytorch_model.bin",
252
+ "transformer.h.27.ln_1.weight": "pytorch_model.bin",
253
+ "transformer.h.27.ln_2.bias": "pytorch_model.bin",
254
+ "transformer.h.27.ln_2.weight": "pytorch_model.bin",
255
+ "transformer.h.27.mlp.c_fc.bias": "pytorch_model.bin",
256
+ "transformer.h.27.mlp.c_fc.weight": "pytorch_model.bin",
257
+ "transformer.h.27.mlp.c_proj.bias": "pytorch_model.bin",
258
+ "transformer.h.27.mlp.c_proj.weight": "pytorch_model.bin",
259
+ "transformer.h.28.attn.c_attn.bias": "pytorch_model.bin",
260
+ "transformer.h.28.attn.c_attn.weight": "pytorch_model.bin",
261
+ "transformer.h.28.attn.c_proj.bias": "pytorch_model.bin",
262
+ "transformer.h.28.attn.c_proj.weight": "pytorch_model.bin",
263
+ "transformer.h.28.ln_1.bias": "pytorch_model.bin",
264
+ "transformer.h.28.ln_1.weight": "pytorch_model.bin",
265
+ "transformer.h.28.ln_2.bias": "pytorch_model.bin",
266
+ "transformer.h.28.ln_2.weight": "pytorch_model.bin",
267
+ "transformer.h.28.mlp.c_fc.bias": "pytorch_model.bin",
268
+ "transformer.h.28.mlp.c_fc.weight": "pytorch_model.bin",
269
+ "transformer.h.28.mlp.c_proj.bias": "pytorch_model.bin",
270
+ "transformer.h.28.mlp.c_proj.weight": "pytorch_model.bin",
271
+ "transformer.h.29.attn.c_attn.bias": "pytorch_model.bin",
272
+ "transformer.h.29.attn.c_attn.weight": "pytorch_model.bin",
273
+ "transformer.h.29.attn.c_proj.bias": "pytorch_model.bin",
274
+ "transformer.h.29.attn.c_proj.weight": "pytorch_model.bin",
275
+ "transformer.h.29.ln_1.bias": "pytorch_model.bin",
276
+ "transformer.h.29.ln_1.weight": "pytorch_model.bin",
277
+ "transformer.h.29.ln_2.bias": "pytorch_model.bin",
278
+ "transformer.h.29.ln_2.weight": "pytorch_model.bin",
279
+ "transformer.h.29.mlp.c_fc.bias": "pytorch_model.bin",
280
+ "transformer.h.29.mlp.c_fc.weight": "pytorch_model.bin",
281
+ "transformer.h.29.mlp.c_proj.bias": "pytorch_model.bin",
282
+ "transformer.h.29.mlp.c_proj.weight": "pytorch_model.bin",
283
+ "transformer.h.3.attn.c_attn.bias": "pytorch_model.bin",
284
+ "transformer.h.3.attn.c_attn.weight": "pytorch_model.bin",
285
+ "transformer.h.3.attn.c_proj.bias": "pytorch_model.bin",
286
+ "transformer.h.3.attn.c_proj.weight": "pytorch_model.bin",
287
+ "transformer.h.3.ln_1.bias": "pytorch_model.bin",
288
+ "transformer.h.3.ln_1.weight": "pytorch_model.bin",
289
+ "transformer.h.3.ln_2.bias": "pytorch_model.bin",
290
+ "transformer.h.3.ln_2.weight": "pytorch_model.bin",
291
+ "transformer.h.3.mlp.c_fc.bias": "pytorch_model.bin",
292
+ "transformer.h.3.mlp.c_fc.weight": "pytorch_model.bin",
293
+ "transformer.h.3.mlp.c_proj.bias": "pytorch_model.bin",
294
+ "transformer.h.3.mlp.c_proj.weight": "pytorch_model.bin",
295
+ "transformer.h.30.attn.c_attn.bias": "pytorch_model.bin",
296
+ "transformer.h.30.attn.c_attn.weight": "pytorch_model.bin",
297
+ "transformer.h.30.attn.c_proj.bias": "pytorch_model.bin",
298
+ "transformer.h.30.attn.c_proj.weight": "pytorch_model.bin",
299
+ "transformer.h.30.ln_1.bias": "pytorch_model.bin",
300
+ "transformer.h.30.ln_1.weight": "pytorch_model.bin",
301
+ "transformer.h.30.ln_2.bias": "pytorch_model.bin",
302
+ "transformer.h.30.ln_2.weight": "pytorch_model.bin",
303
+ "transformer.h.30.mlp.c_fc.bias": "pytorch_model.bin",
304
+ "transformer.h.30.mlp.c_fc.weight": "pytorch_model.bin",
305
+ "transformer.h.30.mlp.c_proj.bias": "pytorch_model.bin",
306
+ "transformer.h.30.mlp.c_proj.weight": "pytorch_model.bin",
307
+ "transformer.h.31.attn.c_attn.bias": "pytorch_model.bin",
308
+ "transformer.h.31.attn.c_attn.weight": "pytorch_model.bin",
309
+ "transformer.h.31.attn.c_proj.bias": "pytorch_model.bin",
310
+ "transformer.h.31.attn.c_proj.weight": "pytorch_model.bin",
311
+ "transformer.h.31.ln_1.bias": "pytorch_model.bin",
312
+ "transformer.h.31.ln_1.weight": "pytorch_model.bin",
313
+ "transformer.h.31.ln_2.bias": "pytorch_model.bin",
314
+ "transformer.h.31.ln_2.weight": "pytorch_model.bin",
315
+ "transformer.h.31.mlp.c_fc.bias": "pytorch_model.bin",
316
+ "transformer.h.31.mlp.c_fc.weight": "pytorch_model.bin",
317
+ "transformer.h.31.mlp.c_proj.bias": "pytorch_model.bin",
318
+ "transformer.h.31.mlp.c_proj.weight": "pytorch_model.bin",
319
+ "transformer.h.32.attn.c_attn.bias": "pytorch_model.bin",
320
+ "transformer.h.32.attn.c_attn.weight": "pytorch_model.bin",
321
+ "transformer.h.32.attn.c_proj.bias": "pytorch_model.bin",
322
+ "transformer.h.32.attn.c_proj.weight": "pytorch_model.bin",
323
+ "transformer.h.32.ln_1.bias": "pytorch_model.bin",
324
+ "transformer.h.32.ln_1.weight": "pytorch_model.bin",
325
+ "transformer.h.32.ln_2.bias": "pytorch_model.bin",
326
+ "transformer.h.32.ln_2.weight": "pytorch_model.bin",
327
+ "transformer.h.32.mlp.c_fc.bias": "pytorch_model.bin",
328
+ "transformer.h.32.mlp.c_fc.weight": "pytorch_model.bin",
329
+ "transformer.h.32.mlp.c_proj.bias": "pytorch_model.bin",
330
+ "transformer.h.32.mlp.c_proj.weight": "pytorch_model.bin",
331
+ "transformer.h.33.attn.c_attn.bias": "pytorch_model.bin",
332
+ "transformer.h.33.attn.c_attn.weight": "pytorch_model.bin",
333
+ "transformer.h.33.attn.c_proj.bias": "pytorch_model.bin",
334
+ "transformer.h.33.attn.c_proj.weight": "pytorch_model.bin",
335
+ "transformer.h.33.ln_1.bias": "pytorch_model.bin",
336
+ "transformer.h.33.ln_1.weight": "pytorch_model.bin",
337
+ "transformer.h.33.ln_2.bias": "pytorch_model.bin",
338
+ "transformer.h.33.ln_2.weight": "pytorch_model.bin",
339
+ "transformer.h.33.mlp.c_fc.bias": "pytorch_model.bin",
340
+ "transformer.h.33.mlp.c_fc.weight": "pytorch_model.bin",
341
+ "transformer.h.33.mlp.c_proj.bias": "pytorch_model.bin",
342
+ "transformer.h.33.mlp.c_proj.weight": "pytorch_model.bin",
343
+ "transformer.h.34.attn.c_attn.bias": "pytorch_model.bin",
344
+ "transformer.h.34.attn.c_attn.weight": "pytorch_model.bin",
345
+ "transformer.h.34.attn.c_proj.bias": "pytorch_model.bin",
346
+ "transformer.h.34.attn.c_proj.weight": "pytorch_model.bin",
347
+ "transformer.h.34.ln_1.bias": "pytorch_model.bin",
348
+ "transformer.h.34.ln_1.weight": "pytorch_model.bin",
349
+ "transformer.h.34.ln_2.bias": "pytorch_model.bin",
350
+ "transformer.h.34.ln_2.weight": "pytorch_model.bin",
351
+ "transformer.h.34.mlp.c_fc.bias": "pytorch_model.bin",
352
+ "transformer.h.34.mlp.c_fc.weight": "pytorch_model.bin",
353
+ "transformer.h.34.mlp.c_proj.bias": "pytorch_model.bin",
354
+ "transformer.h.34.mlp.c_proj.weight": "pytorch_model.bin",
355
+ "transformer.h.35.attn.c_attn.bias": "pytorch_model.bin",
356
+ "transformer.h.35.attn.c_attn.weight": "pytorch_model.bin",
357
+ "transformer.h.35.attn.c_proj.bias": "pytorch_model.bin",
358
+ "transformer.h.35.attn.c_proj.weight": "pytorch_model.bin",
359
+ "transformer.h.35.ln_1.bias": "pytorch_model.bin",
360
+ "transformer.h.35.ln_1.weight": "pytorch_model.bin",
361
+ "transformer.h.35.ln_2.bias": "pytorch_model.bin",
362
+ "transformer.h.35.ln_2.weight": "pytorch_model.bin",
363
+ "transformer.h.35.mlp.c_fc.bias": "pytorch_model.bin",
364
+ "transformer.h.35.mlp.c_fc.weight": "pytorch_model.bin",
365
+ "transformer.h.35.mlp.c_proj.bias": "pytorch_model.bin",
366
+ "transformer.h.35.mlp.c_proj.weight": "pytorch_model.bin",
367
+ "transformer.h.36.attn.c_attn.bias": "pytorch_model.bin",
368
+ "transformer.h.36.attn.c_attn.weight": "pytorch_model.bin",
369
+ "transformer.h.36.attn.c_proj.bias": "pytorch_model.bin",
370
+ "transformer.h.36.attn.c_proj.weight": "pytorch_model.bin",
371
+ "transformer.h.36.ln_1.bias": "pytorch_model.bin",
372
+ "transformer.h.36.ln_1.weight": "pytorch_model.bin",
373
+ "transformer.h.36.ln_2.bias": "pytorch_model.bin",
374
+ "transformer.h.36.ln_2.weight": "pytorch_model.bin",
375
+ "transformer.h.36.mlp.c_fc.bias": "pytorch_model.bin",
376
+ "transformer.h.36.mlp.c_fc.weight": "pytorch_model.bin",
377
+ "transformer.h.36.mlp.c_proj.bias": "pytorch_model.bin",
378
+ "transformer.h.36.mlp.c_proj.weight": "pytorch_model.bin",
379
+ "transformer.h.37.attn.c_attn.bias": "pytorch_model.bin",
380
+ "transformer.h.37.attn.c_attn.weight": "pytorch_model.bin",
381
+ "transformer.h.37.attn.c_proj.bias": "pytorch_model.bin",
382
+ "transformer.h.37.attn.c_proj.weight": "pytorch_model.bin",
383
+ "transformer.h.37.ln_1.bias": "pytorch_model.bin",
384
+ "transformer.h.37.ln_1.weight": "pytorch_model.bin",
385
+ "transformer.h.37.ln_2.bias": "pytorch_model.bin",
386
+ "transformer.h.37.ln_2.weight": "pytorch_model.bin",
387
+ "transformer.h.37.mlp.c_fc.bias": "pytorch_model.bin",
388
+ "transformer.h.37.mlp.c_fc.weight": "pytorch_model.bin",
389
+ "transformer.h.37.mlp.c_proj.bias": "pytorch_model.bin",
390
+ "transformer.h.37.mlp.c_proj.weight": "pytorch_model.bin",
391
+ "transformer.h.38.attn.c_attn.bias": "pytorch_model.bin",
392
+ "transformer.h.38.attn.c_attn.weight": "pytorch_model.bin",
393
+ "transformer.h.38.attn.c_proj.bias": "pytorch_model.bin",
394
+ "transformer.h.38.attn.c_proj.weight": "pytorch_model.bin",
395
+ "transformer.h.38.ln_1.bias": "pytorch_model.bin",
396
+ "transformer.h.38.ln_1.weight": "pytorch_model.bin",
397
+ "transformer.h.38.ln_2.bias": "pytorch_model.bin",
398
+ "transformer.h.38.ln_2.weight": "pytorch_model.bin",
399
+ "transformer.h.38.mlp.c_fc.bias": "pytorch_model.bin",
400
+ "transformer.h.38.mlp.c_fc.weight": "pytorch_model.bin",
401
+ "transformer.h.38.mlp.c_proj.bias": "pytorch_model.bin",
402
+ "transformer.h.38.mlp.c_proj.weight": "pytorch_model.bin",
403
+ "transformer.h.39.attn.c_attn.bias": "pytorch_model.bin",
404
+ "transformer.h.39.attn.c_attn.weight": "pytorch_model.bin",
405
+ "transformer.h.39.attn.c_proj.bias": "pytorch_model.bin",
406
+ "transformer.h.39.attn.c_proj.weight": "pytorch_model.bin",
407
+ "transformer.h.39.ln_1.bias": "pytorch_model.bin",
408
+ "transformer.h.39.ln_1.weight": "pytorch_model.bin",
409
+ "transformer.h.39.ln_2.bias": "pytorch_model.bin",
410
+ "transformer.h.39.ln_2.weight": "pytorch_model.bin",
411
+ "transformer.h.39.mlp.c_fc.bias": "pytorch_model.bin",
412
+ "transformer.h.39.mlp.c_fc.weight": "pytorch_model.bin",
413
+ "transformer.h.39.mlp.c_proj.bias": "pytorch_model.bin",
414
+ "transformer.h.39.mlp.c_proj.weight": "pytorch_model.bin",
415
+ "transformer.h.4.attn.c_attn.bias": "pytorch_model.bin",
416
+ "transformer.h.4.attn.c_attn.weight": "pytorch_model.bin",
417
+ "transformer.h.4.attn.c_proj.bias": "pytorch_model.bin",
418
+ "transformer.h.4.attn.c_proj.weight": "pytorch_model.bin",
419
+ "transformer.h.4.ln_1.bias": "pytorch_model.bin",
420
+ "transformer.h.4.ln_1.weight": "pytorch_model.bin",
421
+ "transformer.h.4.ln_2.bias": "pytorch_model.bin",
422
+ "transformer.h.4.ln_2.weight": "pytorch_model.bin",
423
+ "transformer.h.4.mlp.c_fc.bias": "pytorch_model.bin",
424
+ "transformer.h.4.mlp.c_fc.weight": "pytorch_model.bin",
425
+ "transformer.h.4.mlp.c_proj.bias": "pytorch_model.bin",
426
+ "transformer.h.4.mlp.c_proj.weight": "pytorch_model.bin",
427
+ "transformer.h.5.attn.c_attn.bias": "pytorch_model.bin",
428
+ "transformer.h.5.attn.c_attn.weight": "pytorch_model.bin",
429
+ "transformer.h.5.attn.c_proj.bias": "pytorch_model.bin",
430
+ "transformer.h.5.attn.c_proj.weight": "pytorch_model.bin",
431
+ "transformer.h.5.ln_1.bias": "pytorch_model.bin",
432
+ "transformer.h.5.ln_1.weight": "pytorch_model.bin",
433
+ "transformer.h.5.ln_2.bias": "pytorch_model.bin",
434
+ "transformer.h.5.ln_2.weight": "pytorch_model.bin",
435
+ "transformer.h.5.mlp.c_fc.bias": "pytorch_model.bin",
436
+ "transformer.h.5.mlp.c_fc.weight": "pytorch_model.bin",
437
+ "transformer.h.5.mlp.c_proj.bias": "pytorch_model.bin",
438
+ "transformer.h.5.mlp.c_proj.weight": "pytorch_model.bin",
439
+ "transformer.h.6.attn.c_attn.bias": "pytorch_model.bin",
440
+ "transformer.h.6.attn.c_attn.weight": "pytorch_model.bin",
441
+ "transformer.h.6.attn.c_proj.bias": "pytorch_model.bin",
442
+ "transformer.h.6.attn.c_proj.weight": "pytorch_model.bin",
443
+ "transformer.h.6.ln_1.bias": "pytorch_model.bin",
444
+ "transformer.h.6.ln_1.weight": "pytorch_model.bin",
445
+ "transformer.h.6.ln_2.bias": "pytorch_model.bin",
446
+ "transformer.h.6.ln_2.weight": "pytorch_model.bin",
447
+ "transformer.h.6.mlp.c_fc.bias": "pytorch_model.bin",
448
+ "transformer.h.6.mlp.c_fc.weight": "pytorch_model.bin",
449
+ "transformer.h.6.mlp.c_proj.bias": "pytorch_model.bin",
450
+ "transformer.h.6.mlp.c_proj.weight": "pytorch_model.bin",
451
+ "transformer.h.7.attn.c_attn.bias": "pytorch_model.bin",
452
+ "transformer.h.7.attn.c_attn.weight": "pytorch_model.bin",
453
+ "transformer.h.7.attn.c_proj.bias": "pytorch_model.bin",
454
+ "transformer.h.7.attn.c_proj.weight": "pytorch_model.bin",
455
+ "transformer.h.7.ln_1.bias": "pytorch_model.bin",
456
+ "transformer.h.7.ln_1.weight": "pytorch_model.bin",
457
+ "transformer.h.7.ln_2.bias": "pytorch_model.bin",
458
+ "transformer.h.7.ln_2.weight": "pytorch_model.bin",
459
+ "transformer.h.7.mlp.c_fc.bias": "pytorch_model.bin",
460
+ "transformer.h.7.mlp.c_fc.weight": "pytorch_model.bin",
461
+ "transformer.h.7.mlp.c_proj.bias": "pytorch_model.bin",
462
+ "transformer.h.7.mlp.c_proj.weight": "pytorch_model.bin",
463
+ "transformer.h.8.attn.c_attn.bias": "pytorch_model.bin",
464
+ "transformer.h.8.attn.c_attn.weight": "pytorch_model.bin",
465
+ "transformer.h.8.attn.c_proj.bias": "pytorch_model.bin",
466
+ "transformer.h.8.attn.c_proj.weight": "pytorch_model.bin",
467
+ "transformer.h.8.ln_1.bias": "pytorch_model.bin",
468
+ "transformer.h.8.ln_1.weight": "pytorch_model.bin",
469
+ "transformer.h.8.ln_2.bias": "pytorch_model.bin",
470
+ "transformer.h.8.ln_2.weight": "pytorch_model.bin",
471
+ "transformer.h.8.mlp.c_fc.bias": "pytorch_model.bin",
472
+ "transformer.h.8.mlp.c_fc.weight": "pytorch_model.bin",
473
+ "transformer.h.8.mlp.c_proj.bias": "pytorch_model.bin",
474
+ "transformer.h.8.mlp.c_proj.weight": "pytorch_model.bin",
475
+ "transformer.h.9.attn.c_attn.bias": "pytorch_model.bin",
476
+ "transformer.h.9.attn.c_attn.weight": "pytorch_model.bin",
477
+ "transformer.h.9.attn.c_proj.bias": "pytorch_model.bin",
478
+ "transformer.h.9.attn.c_proj.weight": "pytorch_model.bin",
479
+ "transformer.h.9.ln_1.bias": "pytorch_model.bin",
480
+ "transformer.h.9.ln_1.weight": "pytorch_model.bin",
481
+ "transformer.h.9.ln_2.bias": "pytorch_model.bin",
482
+ "transformer.h.9.ln_2.weight": "pytorch_model.bin",
483
+ "transformer.h.9.mlp.c_fc.bias": "pytorch_model.bin",
484
+ "transformer.h.9.mlp.c_fc.weight": "pytorch_model.bin",
485
+ "transformer.h.9.mlp.c_proj.bias": "pytorch_model.bin",
486
+ "transformer.h.9.mlp.c_proj.weight": "pytorch_model.bin",
487
+ "transformer.ln_f.bias": "pytorch_model.bin",
488
+ "transformer.ln_f.weight": "pytorch_model.bin",
489
+ "transformer.wpe.weight": "pytorch_model.bin",
490
+ "transformer.wte.weight": "pytorch_model.bin"
491
+ }
492
+ }
special_tokens_map.json ADDED
@@ -0,0 +1,26 @@
+ {
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "<fim_prefix>",
+     "<fim_middle>",
+     "<fim_suffix>",
+     "<fim_pad>",
+     "<filename>",
+     "<gh_stars>",
+     "<issue_start>",
+     "<issue_comment>",
+     "<issue_closed>",
+     "<jupyter_start>",
+     "<jupyter_text>",
+     "<jupyter_code>",
+     "<jupyter_output>",
+     "<empty_output>",
+     "<commit_before>",
+     "<commit_msg>",
+     "<commit_after>",
+     "<reponame>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,30 @@
+ {
+   "add_prefix_space": false,
+   "additional_special_tokens": [
+     "<|endoftext|>",
+     "<fim_prefix>",
+     "<fim_middle>",
+     "<fim_suffix>",
+     "<fim_pad>",
+     "<filename>",
+     "<gh_stars>",
+     "<issue_start>",
+     "<issue_comment>",
+     "<issue_closed>",
+     "<jupyter_start>",
+     "<jupyter_text>",
+     "<jupyter_code>",
+     "<jupyter_output>",
+     "<empty_output>",
+     "<commit_before>",
+     "<commit_msg>",
+     "<commit_after>",
+     "<reponame>"
+   ],
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1000000000000000019884624838656,
+   "tokenizer_class": "GPT2Tokenizer",
+   "unk_token": "<|endoftext|>",
+   "vocab_size": 49152
+ }
vocab.json ADDED
The diff for this file is too large to render. See raw diff