Pageee committed d54338b (parent: 9b0c07b)

End of training

README.md ADDED
@@ -0,0 +1,89 @@
+ ---
+ language:
+ - en
+ base_model: distil-small.en
+ tags:
+ - generated_from_trainer
+ datasets:
+ - librispeech_asr
+ metrics:
+ - wer
+ model-index:
+ - name: DistilFT-English-10m
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: librispeech
+       type: librispeech_asr
+       config: default
+       split: None
+       args: 'config: en, split: test-clean'
+     metrics:
+     - name: Wer
+       type: wer
+       value: 3.5814019853645607
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # DistilFT-English-10m
+
+ This model is a fine-tuned version of [distil-small.en](https://huggingface.co/distil-small.en) on the librispeech dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5012
+ - Wer: 3.5814
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 300
+ - training_steps: 1000
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch    | Step | Validation Loss | Wer    |
+ |:-------------:|:--------:|:----:|:---------------:|:------:|
+ | 0.5641        | 33.3333  | 100  | 0.9641          | 3.4754 |
+ | 0.3271        | 66.6667  | 200  | 0.7822          | 3.4652 |
+ | 0.0871        | 100.0    | 300  | 0.5731          | 3.4530 |
+ | 0.0149        | 133.3333 | 400  | 0.5142          | 3.4774 |
+ | 0.0043        | 166.6667 | 500  | 0.5051          | 3.5345 |
+ | 0.0026        | 200.0    | 600  | 0.5030          | 3.5569 |
+ | 0.002         | 233.3333 | 700  | 0.5020          | 3.5671 |
+ | 0.0016        | 266.6667 | 800  | 0.5015          | 3.5773 |
+ | 0.0014        | 300.0    | 900  | 0.5013          | 3.5936 |
+ | 0.0014        | 333.3333 | 1000 | 0.5012          | 3.5814 |
+
+
+ ### Framework versions
+
+ - Transformers 4.41.0.dev0
+ - Pytorch 2.3.0+cu121
+ - Datasets 2.19.1
+ - Tokenizers 0.19.1
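
The hyperparameter list in the card above maps naturally onto `transformers` training arguments. The following is a hedged reconstruction, not the author's training script: only values stated in the card are set, the output directory is an assumption, and the Adam betas/epsilon shown in the card are the optimizer defaults.

```python
# Hedged reconstruction of the hyperparameters listed in the card above.
# Only values stated in the card are set explicitly; output_dir is assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="DistilFT-English-10m",  # assumed output directory
    learning_rate=5e-7,
    per_device_train_batch_size=8,      # train_batch_size: 8
    per_device_eval_batch_size=8,       # eval_batch_size: 8
    gradient_accumulation_steps=2,      # total_train_batch_size: 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=300,
    max_steps=1000,                     # training_steps: 1000
    fp16=True,                          # Native AMP; requires a CUDA device
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the optimizer defaults.
)
```

The card itself carries no usage snippet; a typical way to transcribe audio with a Whisper-style checkpoint like this one is the `automatic-speech-recognition` pipeline. The repo id `Pageee/DistilFT-English-10m` and the input file `sample.wav` below are assumptions, not taken from the commit.

```python
# Minimal inference sketch; the repo id and audio path are hypothetical.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Pageee/DistilFT-English-10m",           # assumed repo id
    device=0 if torch.cuda.is_available() else -1,
)
print(asr("sample.wav")["text"])                   # 16 kHz mono audio assumed
```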
generation_config.json ADDED
@@ -0,0 +1,190 @@
+ {
+   "alignment_heads": [
+     [6, 6], [7, 0], [7, 3], [7, 8], [8, 2], [8, 5], [8, 7], [9, 0], [9, 4], [9, 8],
+     [9, 10], [10, 0], [10, 1], [10, 2], [10, 3], [10, 6], [10, 11], [11, 2], [11, 4]
+   ],
+   "begin_suppress_tokens": [220, 50256],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50257,
+   "eos_token_id": 50256,
+   "is_multilingual": false,
+   "language": null,
+   "max_initial_timestamp_index": 50,
+   "max_length": 448,
+   "no_timestamps_token_id": 50362,
+   "pad_token_id": 50256,
+   "prev_sot_token_id": 50360,
+   "return_timestamps": false,
+   "suppress_tokens": [
+     1, 2, 7, 8, 9, 10, 14, 25, 26, 27,
+     28, 29, 31, 58, 59, 60, 61, 62, 63, 90,
+     91, 92, 93, 357, 366, 438, 532, 685, 705, 796,
+     930, 1058, 1220, 1267, 1279, 1303, 1343, 1377, 1391, 1635,
+     1782, 1875, 2162, 2361, 2488, 3467, 4008, 4211, 4600, 4808,
+     5299, 5855, 6329, 7203, 9609, 9959, 10563, 10786, 11420, 11709,
+     11907, 13163, 13697, 13700, 14808, 15306, 16410, 16791, 17992, 19203,
+     19510, 20724, 22305, 22935, 27007, 30109, 30420, 33409, 34949, 40283,
+     40493, 40549, 47282, 49146, 50257, 50357, 50358, 50359, 50360, 50361
+   ],
+   "task": null,
+   "transformers_version": "4.41.0.dev0",
+   "use_scan": false
+ }
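
The file above is a standard Whisper-style generation config: `alignment_heads` lists the cross-attention heads used for word-level timestamp alignment, `suppress_tokens` and `begin_suppress_tokens` mask token ids during decoding, and `max_length: 448` caps the decoded sequence. A minimal sketch of how it is loaded and overridden follows; the repo id is a hypothetical placeholder.

```python
# Sketch of consuming the generation_config.json added above.
# "Pageee/DistilFT-English-10m" is a hypothetical repo id.
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("Pageee/DistilFT-English-10m")

print(gen_config.max_length)              # 448
print(gen_config.no_timestamps_token_id)  # 50362
print(gen_config.return_timestamps)       # False in this config

# Individual fields can be overridden at generation time, e.g.:
# predicted_ids = model.generate(
#     input_features,
#     generation_config=gen_config,
#     return_timestamps=True,
# )
```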
runs/May31_20-59-33_a100gpu2/events.out.tfevents.1717182104.a100gpu2.4074284.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:24d232d54d10bbf02b65595f7a38d048c18bf45960799dc96d6553f370423a24
- size 12362
+ oid sha256:4c4e4b367c23b5821e12241800977d63aed1f9c1f5d76fc40e2bdd6f7548409b
+ size 18526
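
The changed file is a Git LFS pointer: only the SHA-256 and byte size of the TensorBoard event log are stored in Git (here the log grows from 12362 to 18526 bytes), while the binary itself lives in LFS storage. Once the object is pulled locally, the logged curves can be read back, for example with TensorBoard's `EventAccumulator`; the tag name below is an assumption based on the Trainer's usual `eval/` and `train/` prefixes.

```python
# Sketch: read scalar curves out of the LFS-backed TensorBoard event file.
# Assumes the event file has been materialized locally (e.g. via `git lfs pull`).
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

ea = EventAccumulator("runs/May31_20-59-33_a100gpu2")
ea.Reload()

print(ea.Tags()["scalars"])           # list of logged scalar tags
for event in ea.Scalars("eval/wer"):  # tag name assumed from Trainer defaults
    print(event.step, event.value)
```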