mlabonne committed
Commit
011fd6f
1 Parent(s): a9db78c

End of training

README.md ADDED
@@ -0,0 +1,106 @@
+ ---
+ license: apache-2.0
+ base_model: EleutherAI/pythia-70m-deduped
+ tags:
+ - generated_from_trainer
+ model-index:
+ - name: results
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # results
+
+ This model is a fine-tuned version of [EleutherAI/pythia-70m-deduped](https://huggingface.co/EleutherAI/pythia-70m-deduped) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.2691
+
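+ A minimal inference sketch (the repo id below is a placeholder for wherever this checkpoint is published; substitute the actual Hub path):
+
+ ```python
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ # Placeholder id; replace with the actual Hub path of this fine-tuned checkpoint.
+ model_id = "mlabonne/results"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id)
+
+ inputs = tokenizer("The quick brown fox", return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=30)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+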
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-05
+ - train_batch_size: 100
+ - eval_batch_size: 100
+ - seed: 42
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: cosine
+ - num_epochs: 5
+
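+ Expressed as `transformers.TrainingArguments`, these settings would look roughly like the sketch below. The output directory and evaluation cadence are assumptions inferred from the model name and the per-step results table, not taken from the original script:
+
+ ```python
+ from transformers import TrainingArguments
+
+ args = TrainingArguments(
+     output_dir="results",            # assumed from the model name
+     learning_rate=5e-5,
+     per_device_train_batch_size=100,
+     per_device_eval_batch_size=100,
+     seed=42,
+     lr_scheduler_type="cosine",
+     num_train_epochs=5,
+     evaluation_strategy="steps",     # assumption: the table below logs eval every step
+     eval_steps=1,
+ )
+ # Adam betas (0.9, 0.999) and epsilon 1e-8 are the optimizer defaults.
+ ```
+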
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss |
+ |:-------------:|:-----:|:----:|:---------------:|
+ | 2.852 | 0.1 | 1 | 3.1074 |
+ | 3.0923 | 0.2 | 2 | 2.3879 |
+ | 2.3371 | 0.3 | 3 | 2.1025 |
+ | 2.1166 | 0.4 | 4 | 1.9761 |
+ | 2.0538 | 0.5 | 5 | 1.8446 |
+ | 1.8972 | 0.6 | 6 | 1.7470 |
+ | 1.8356 | 0.7 | 7 | 1.6615 |
+ | 1.702 | 0.8 | 8 | 1.6187 |
+ | 1.6907 | 0.9 | 9 | 1.6626 |
+ | 1.5877 | 1.0 | 10 | 1.6192 |
+ | 1.6332 | 1.1 | 11 | 1.5464 |
+ | 1.4906 | 1.2 | 12 | 1.5091 |
+ | 1.5267 | 1.3 | 13 | 1.4850 |
+ | 1.4857 | 1.4 | 14 | 1.4572 |
+ | 1.4247 | 1.5 | 15 | 1.4319 |
+ | 1.4815 | 1.6 | 16 | 1.4207 |
+ | 1.3584 | 1.7 | 17 | 1.4092 |
+ | 1.4812 | 1.8 | 18 | 1.4196 |
+ | 1.4381 | 1.9 | 19 | 1.4021 |
+ | 1.453 | 2.0 | 20 | 1.4013 |
+ | 1.3468 | 2.1 | 21 | 1.3781 |
+ | 1.3327 | 2.2 | 22 | 1.3598 |
+ | 1.3623 | 2.3 | 23 | 1.3516 |
+ | 1.2876 | 2.4 | 24 | 1.3384 |
+ | 1.374 | 2.5 | 25 | 1.3366 |
+ | 1.3863 | 2.6 | 26 | 1.3265 |
+ | 1.3327 | 2.7 | 27 | 1.3186 |
+ | 1.2886 | 2.8 | 28 | 1.3130 |
+ | 1.3842 | 2.9 | 29 | 1.3024 |
+ | 1.3105 | 3.0 | 30 | 1.2986 |
+ | 1.2331 | 3.1 | 31 | 1.2966 |
+ | 1.3227 | 3.2 | 32 | 1.2954 |
+ | 1.2923 | 3.3 | 33 | 1.2928 |
+ | 1.2976 | 3.4 | 34 | 1.2901 |
+ | 1.3207 | 3.5 | 35 | 1.2879 |
+ | 1.2455 | 3.6 | 36 | 1.2834 |
+ | 1.2546 | 3.7 | 37 | 1.2779 |
+ | 1.2999 | 3.8 | 38 | 1.2744 |
+ | 1.2484 | 3.9 | 39 | 1.2723 |
+ | 1.281 | 4.0 | 40 | 1.2720 |
+ | 1.2134 | 4.1 | 41 | 1.2722 |
+ | 1.214 | 4.2 | 42 | 1.2721 |
+ | 1.3031 | 4.3 | 43 | 1.2715 |
+ | 1.2174 | 4.4 | 44 | 1.2708 |
+ | 1.2359 | 4.5 | 45 | 1.2703 |
+ | 1.2578 | 4.6 | 46 | 1.2699 |
+ | 1.2815 | 4.7 | 47 | 1.2695 |
+ | 1.2866 | 4.8 | 48 | 1.2693 |
+ | 1.2878 | 4.9 | 49 | 1.2691 |
+ | 1.2214 | 5.0 | 50 | 1.2691 |
+
+
+ ### Framework versions
+
+ - Transformers 4.35.2
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
config.json ADDED
@@ -0,0 +1,29 @@
+ {
+   "_name_or_path": "EleutherAI/pythia-70m-deduped",
+   "architectures": [
+     "GPTNeoXForCausalLM"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "classifier_dropout": 0.1,
+   "eos_token_id": 0,
+   "hidden_act": "gelu",
+   "hidden_dropout": 0.0,
+   "hidden_size": 512,
+   "initializer_range": 0.02,
+   "intermediate_size": 2048,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 2048,
+   "model_type": "gpt_neox",
+   "num_attention_heads": 8,
+   "num_hidden_layers": 6,
+   "rope_scaling": null,
+   "rotary_emb_base": 10000,
+   "rotary_pct": 0.25,
+   "tie_word_embeddings": false,
+   "torch_dtype": "float32",
+   "transformers_version": "4.35.2",
+   "use_cache": true,
+   "use_parallel_residual": true,
+   "vocab_size": 50304
+ }
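
For orientation, the shape fields above pin down the model size. A rough sketch (weight matrices only, biases and layer norms ignored), assuming the same config is loadable from the base model id:

```python
from transformers import AutoConfig

# Assumption: the base model's config matches the JSON above.
cfg = AutoConfig.from_pretrained("EleutherAI/pythia-70m-deduped")

h, n_layers, vocab = cfg.hidden_size, cfg.num_hidden_layers, cfg.vocab_size
embed = vocab * h                    # input embeddings
unembed = vocab * h                  # separate output head ("tie_word_embeddings": false)
per_layer = 4 * h * h + 2 * h * cfg.intermediate_size  # attention (Q,K,V,O) + MLP
total = embed + unembed + n_layers * per_layer
print(f"~{total / 1e6:.1f}M params, ~{total * 4 / 1e6:.0f} MB in float32")
# ~70M params * 4 bytes is consistent with the ~282 MB model.safetensors added below.
```
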
generation_config.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "_from_model_config": true,
+   "bos_token_id": 0,
+   "eos_token_id": 0,
+   "transformers_version": "4.35.2"
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aa11ba5d2681e36480f82c6b83fc1edcf601b2f0ccff2aafcad10b0a6da2d52c
+ size 281715176
special_tokens_map.json ADDED
@@ -0,0 +1,6 @@
+ {
+   "bos_token": "<|endoftext|>",
+   "eos_token": "<|endoftext|>",
+   "pad_token": "<|endoftext|>",
+   "unk_token": "<|endoftext|>"
+ }
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,212 @@
+ {
+   "add_prefix_space": false,
+   "added_tokens_decoder": {
+     "0": {
+       "content": "<|endoftext|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "1": {
+       "content": "<|padding|>",
+       "lstrip": false,
+       "normalized": false,
+       "rstrip": false,
+       "single_word": false,
+       "special": true
+     },
+     "50254": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50255": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50256": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50257": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50258": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50259": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50260": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50261": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50262": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50263": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50264": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50265": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50266": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50267": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50268": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50269": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50270": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50271": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50272": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50273": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50274": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50275": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     },
+     "50276": {
+       "content": " ",
+       "lstrip": false,
+       "normalized": true,
+       "rstrip": false,
+       "single_word": false,
+       "special": false
+     }
+   },
+   "bos_token": "<|endoftext|>",
+   "clean_up_tokenization_spaces": true,
+   "eos_token": "<|endoftext|>",
+   "model_max_length": 1000000000000000019884624838656,
+   "pad_token": "<|endoftext|>",
+   "tokenizer_class": "GPTNeoXTokenizer",
+   "unk_token": "<|endoftext|>"
+ }
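
One detail worth noting above: ids 50254-50276 are ordinary (non-special) whitespace tokens, while `<|endoftext|>` serves as bos, eos, pad, and unk at once. A quick check (hedged: the repo id is the same placeholder as in the model card):

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("mlabonne/results")  # placeholder id
print(tok.pad_token, tok.eos_token)  # both should be "<|endoftext|>"
print(repr(tok.decode([50276])))     # should show a whitespace run, kept as a normal token
```
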
training_args.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1df912fb63d8e5174ea540eec03bf8bc75d4c1aa4089da2a600316cf27f2d9be
+ size 4536