Plim committed on
Commit
e2f6d01
1 Parent(s): 89ae304

Training in progress, step 14000

Files changed (32)
  1. .gitattributes +1 -0
  2. .ipynb_checkpoints/README-checkpoint.md +105 -0
  3. .ipynb_checkpoints/create_lm-checkpoint.ipynb +309 -0
  4. .ipynb_checkpoints/log_mozilla-foundation_common_voice_8_0_fr_test_predictions-checkpoint.txt +0 -0
  5. .ipynb_checkpoints/log_mozilla-foundation_common_voice_8_0_fr_test_targets-checkpoint.txt +0 -0
  6. .ipynb_checkpoints/log_speech-recognition-community-v2_dev_data_fr_validation_predictions-checkpoint.txt +0 -0
  7. .ipynb_checkpoints/log_speech-recognition-community-v2_dev_data_fr_validation_targets-checkpoint.txt +0 -0
  8. .ipynb_checkpoints/mozilla-foundation_common_voice_8_0_fr_test_eval_results-checkpoint.txt +2 -0
  9. .ipynb_checkpoints/preprocessor_config-checkpoint.json +10 -0
  10. .ipynb_checkpoints/run-checkpoint.sh +2 -2
  11. alphabet.json +1 -0
  12. config.json +1 -1
  13. create_lm.ipynb +344 -0
  14. keep_model/pytorch_model.bin +3 -0
  15. langague_model/5gram.bin +3 -0
  16. langague_model/attrs.json +1 -0
  17. langague_model/unigrams.txt +0 -0
  18. pytorch_model.bin +1 -1
  19. run.sh +2 -2
  20. training_args.bin +1 -1
  21. wandb/debug-internal.log +1 -1
  22. wandb/debug.log +1 -1
  23. wandb/latest-run +1 -1
  24. wandb/run-20220206_201634-uhiy9e2t/files/conda-environment.yaml +0 -0
  25. wandb/run-20220206_201634-uhiy9e2t/files/config.yaml +0 -0
  26. wandb/run-20220206_201634-uhiy9e2t/files/output.log +1491 -0
  27. wandb/run-20220206_201634-uhiy9e2t/files/requirements.txt +183 -0
  28. wandb/run-20220206_201634-uhiy9e2t/files/wandb-metadata.json +61 -0
  29. wandb/run-20220206_201634-uhiy9e2t/files/wandb-summary.json +0 -0
  30. wandb/run-20220206_201634-uhiy9e2t/logs/debug-internal.log +0 -0
  31. wandb/run-20220206_201634-uhiy9e2t/logs/debug.log +26 -0
  32. wandb/run-20220206_201634-uhiy9e2t/run-uhiy9e2t.wandb +3 -0
.gitattributes CHANGED
@@ -26,3 +26,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
26
  *.zstandard filter=lfs diff=lfs merge=lfs -text
27
  *tfevents* filter=lfs diff=lfs merge=lfs -text
28
  wandb/run-20220203_170643-2fkfdtzb/run-2fkfdtzb.wandb filter=lfs diff=lfs merge=lfs -text
29
+ wandb/run-20220206_201634-uhiy9e2t/run-uhiy9e2t.wandb filter=lfs diff=lfs merge=lfs -text
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,105 @@
1
+ ---
2
+ language:
3
+ - fr
4
+ license: apache-2.0
5
+ tags:
6
+ - automatic-speech-recognition
7
+ - mozilla-foundation/common_voice_8_0
8
+ - generated_from_trainer
9
+ - robust-speech-event
10
+ model-index:
11
+ - name: XLS-R-1B - French
12
+ results:
13
+ - task:
14
+ name: Automatic Speech Recognition
15
+ type: automatic-speech-recognition
16
+ dataset:
17
+ name: Common Voice 8
18
+ type: mozilla-foundation/common_voice_8_0
19
+ args: fr
20
+ metrics:
21
+ - name: Test WER
22
+ type: wer
23
+ value: 18.33
24
+ - name: Test CER
25
+ type: cer
26
+ value: 5.60
27
+ - task:
28
+ name: Automatic Speech Recognition
29
+ type: automatic-speech-recognition
30
+ dataset:
31
+ name: Robust Speech Event - Dev Data
32
+ type: speech-recognition-community-v2/dev_data
33
+ args: fr
34
+ metrics:
35
+ - name: Test WER
36
+ type: wer
37
+ value: 60.25
38
+ - name: Test CER
39
+ type: cer
40
+ value: 15.68
41
+ ---
42
+
43
+ ## Model description
44
+
45
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - FR dataset.
46
+
47
+ ## Training procedure
48
+
49
+ ### Training hyperparameters
50
+
51
+ The following hyperparameters were used during training:
52
+ - learning_rate: 7.5e-05
53
+ - train_batch_size: 16
54
+ - eval_batch_size: 16
55
+ - seed: 42
56
+ - gradient_accumulation_steps: 8
57
+ - total_train_batch_size: 128
58
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
59
+ - lr_scheduler_type: linear
60
+ - lr_scheduler_warmup_steps: 2000
61
+ - num_epochs: 4.0
62
+ - mixed_precision_training: Native AMP
63
+
64
+ ### Training results
65
+
66
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
67
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|
68
+ | 0.9827 | 0.29 | 1000 | inf | 0.2937 |
69
+ | 1.0203 | 0.57 | 2000 | inf | 0.2711 |
70
+ | 1.0048 | 0.86 | 3000 | inf | 0.2620 |
71
+ | 0.9858 | 1.15 | 4000 | inf | 0.2522 |
72
+ | 0.9709 | 1.43 | 5000 | inf | 0.2365 |
73
+ | 0.9347 | 1.72 | 6000 | inf | 0.2332 |
74
+ | 0.9256 | 2.01 | 7000 | inf | 0.2261 |
75
+ | 0.8936 | 2.29 | 8000 | inf | 0.2203 |
76
+ | 0.877 | 2.58 | 9000 | inf | 0.2096 |
77
+ | 0.8393 | 2.87 | 10000 | inf | 0.2017 |
78
+ | 0.8156 | 3.15 | 11000 | inf | 0.1936 |
79
+ | 0.8015 | 3.44 | 12000 | inf | 0.1880 |
80
+ | 0.774 | 3.73 | 13000 | inf | 0.1834 |
81
+
82
+ It achieves its best result on the validation set at step 13000:
83
+ - Wer: 0.1834
84
+
85
+ A problem occurred while computing the validation loss, which is why it is reported as `inf` in the table above.
86
+
87
+ ### Framework versions
88
+
89
+ - Transformers 4.17.0.dev0
90
+ - Pytorch 1.10.2+cu102
91
+ - Datasets 1.18.3.dev0
92
+ - Tokenizers 0.11.0
93
+
94
+ ### Evaluation Commands
95
+ 1. To evaluate on `mozilla-foundation/common_voice_8_0` with split `test`
96
+
97
+ ```bash
98
+ python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset mozilla-foundation/common_voice_8_0 --config fr --split test
99
+ ```
100
+
101
+ 2. To evaluate on `speech-recognition-community-v2/dev_data`
102
+
103
+ ```bash
104
+ python eval.py --model_id Plim/xls-r-1b-cv_8-fr --dataset speech-recognition-community-v2/dev_data --config fr --split validation --chunk_length_s 5.0 --stride_length_s 1.0
105
+ ```
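The evaluation commands above rely on the repository's `eval.py`. For a quick sanity check outside that script, a minimal transcription sketch along these lines should work once this commit's weights are on the Hub; the audio file name is a placeholder, and the chunking values simply mirror the dev-data command above.

```python
from transformers import pipeline

# Minimal sketch (not part of this commit): load the fine-tuned checkpoint
# from the Hub and transcribe a local 16 kHz French recording.
asr = pipeline("automatic-speech-recognition", model="Plim/xls-r-1b-cv_8-fr")

# chunk_length_s / stride_length_s mirror the dev-data evaluation command above.
result = asr("example_fr.wav", chunk_length_s=5.0, stride_length_s=1.0)
print(result["text"])
```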
.ipynb_checkpoints/create_lm-checkpoint.ipynb ADDED
@@ -0,0 +1,309 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 10,
6
+ "id": "7b5f7142",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import transformers\n",
11
+ "from datasets import load_dataset\n",
12
+ "import re"
13
+ ]
14
+ },
15
+ {
16
+ "cell_type": "code",
17
+ "execution_count": 11,
18
+ "id": "4ad6422f",
19
+ "metadata": {},
20
+ "outputs": [],
21
+ "source": [
22
+ "username = \"Plim\" # change to your username\n",
23
+ "target_lang = \"fr\""
24
+ ]
25
+ },
26
+ {
27
+ "cell_type": "code",
28
+ "execution_count": 4,
29
+ "id": "37b2c1d6",
30
+ "metadata": {},
31
+ "outputs": [
32
+ {
33
+ "data": {
34
+ "application/vnd.jupyter.widget-view+json": {
35
+ "model_id": "f230feb459c441a9a11e53b867e8914a",
36
+ "version_major": 2,
37
+ "version_minor": 0
38
+ },
39
+ "text/plain": [
40
+ "Downloading: 0%| | 0.00/2.60k [00:00<?, ?B/s]"
41
+ ]
42
+ },
43
+ "metadata": {},
44
+ "output_type": "display_data"
45
+ },
46
+ {
47
+ "data": {
48
+ "application/vnd.jupyter.widget-view+json": {
49
+ "model_id": "a4a8fa35d48f4a6db8072baed6b2389b",
50
+ "version_major": 2,
51
+ "version_minor": 0
52
+ },
53
+ "text/plain": [
54
+ "Downloading: 0%| | 0.00/29.6k [00:00<?, ?B/s]"
55
+ ]
56
+ },
57
+ "metadata": {},
58
+ "output_type": "display_data"
59
+ },
60
+ {
61
+ "name": "stderr",
62
+ "output_type": "stream",
63
+ "text": [
64
+ "Using custom data configuration en-fr-lang1=en,lang2=fr\n"
65
+ ]
66
+ },
67
+ {
68
+ "name": "stdout",
69
+ "output_type": "stream",
70
+ "text": [
71
+ "Downloading and preparing dataset europarl_bilingual/en-fr (download: 278.07 MiB, generated: 643.66 MiB, post-processed: Unknown size, total: 921.72 MiB) to /workspace/.cache/huggingface/datasets/europarl_bilingual/en-fr-lang1=en,lang2=fr/8.0.0/2ab0200e7729616bfd4a4df6bfb29b31746ceb5a59f8c75c02ca35e1ebead950...\n"
72
+ ]
73
+ },
74
+ {
75
+ "data": {
76
+ "application/vnd.jupyter.widget-view+json": {
77
+ "model_id": "aa00b5d6dc154449861dddcf9f0d2fc8",
78
+ "version_major": 2,
79
+ "version_minor": 0
80
+ },
81
+ "text/plain": [
82
+ "Downloading: 0%| | 0.00/142M [00:00<?, ?B/s]"
83
+ ]
84
+ },
85
+ "metadata": {},
86
+ "output_type": "display_data"
87
+ },
88
+ {
89
+ "data": {
90
+ "application/vnd.jupyter.widget-view+json": {
91
+ "model_id": "563096fc78454333b5ae23e87a7e3469",
92
+ "version_major": 2,
93
+ "version_minor": 0
94
+ },
95
+ "text/plain": [
96
+ "Downloading: 0%| | 0.00/140M [00:00<?, ?B/s]"
97
+ ]
98
+ },
99
+ "metadata": {},
100
+ "output_type": "display_data"
101
+ },
102
+ {
103
+ "data": {
104
+ "application/vnd.jupyter.widget-view+json": {
105
+ "model_id": "eba05e5151b34505b9a43e383cb6cfe0",
106
+ "version_major": 2,
107
+ "version_minor": 0
108
+ },
109
+ "text/plain": [
110
+ "Downloading: 0%| | 0.00/9.30M [00:00<?, ?B/s]"
111
+ ]
112
+ },
113
+ "metadata": {},
114
+ "output_type": "display_data"
115
+ },
116
+ {
117
+ "data": {
118
+ "application/vnd.jupyter.widget-view+json": {
119
+ "model_id": "",
120
+ "version_major": 2,
121
+ "version_minor": 0
122
+ },
123
+ "text/plain": [
124
+ "0 examples [00:00, ? examples/s]"
125
+ ]
126
+ },
127
+ "metadata": {},
128
+ "output_type": "display_data"
129
+ },
130
+ {
131
+ "name": "stdout",
132
+ "output_type": "stream",
133
+ "text": [
134
+ "Dataset europarl_bilingual downloaded and prepared to /workspace/.cache/huggingface/datasets/europarl_bilingual/en-fr-lang1=en,lang2=fr/8.0.0/2ab0200e7729616bfd4a4df6bfb29b31746ceb5a59f8c75c02ca35e1ebead950. Subsequent calls will reuse this data.\n"
135
+ ]
136
+ }
137
+ ],
138
+ "source": [
139
+ "dataset = load_dataset(\"europarl_bilingual\", lang1=\"en\", lang2=target_lang, split=\"train\")"
140
+ ]
141
+ },
142
+ {
143
+ "cell_type": "code",
144
+ "execution_count": 12,
145
+ "id": "81259294",
146
+ "metadata": {},
147
+ "outputs": [],
148
+ "source": [
149
+ "def extract_text(batch):\n",
150
+ " target_lang = \"fr\"\n",
151
+ " chars_to_ignore_regex = '[^a-zàâäçéèêëîïôöùûüÿ\\'’ ]'\n",
152
+ " text = batch[\"translation\"][target_lang]\n",
153
+ " batch[\"text\"] = re.sub(chars_to_ignore_regex, \"\", text.lower()).replace('’', \"'\")\n",
154
+ " return batch"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": 13,
160
+ "id": "2dec7b80",
161
+ "metadata": {},
162
+ "outputs": [
163
+ {
164
+ "data": {
165
+ "application/vnd.jupyter.widget-view+json": {
166
+ "model_id": "00d998de52544f6c8750c53bc0c85d66",
167
+ "version_major": 2,
168
+ "version_minor": 0
169
+ },
170
+ "text/plain": [
171
+ "0ex [00:00, ?ex/s]"
172
+ ]
173
+ },
174
+ "metadata": {},
175
+ "output_type": "display_data"
176
+ }
177
+ ],
178
+ "source": [
179
+ "dataset = dataset.map(extract_text, remove_columns=dataset.column_names)"
180
+ ]
181
+ },
182
+ {
183
+ "cell_type": "code",
184
+ "execution_count": 14,
185
+ "id": "c6feaf74",
186
+ "metadata": {},
187
+ "outputs": [
188
+ {
189
+ "data": {
190
+ "application/vnd.jupyter.widget-view+json": {
191
+ "model_id": "461a219cdb6d42b2b890ec028c336e7f",
192
+ "version_major": 2,
193
+ "version_minor": 0
194
+ },
195
+ "text/plain": [
196
+ "Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]"
197
+ ]
198
+ },
199
+ "metadata": {},
200
+ "output_type": "display_data"
201
+ }
202
+ ],
203
+ "source": [
204
+ "dataset.push_to_hub(f\"{target_lang}_corpora_parliament_processed\", split=\"train\")"
205
+ ]
206
+ },
207
+ {
208
+ "cell_type": "code",
209
+ "execution_count": 15,
210
+ "id": "b0e6ae25",
211
+ "metadata": {},
212
+ "outputs": [],
213
+ "source": [
214
+ "with open(\"text.txt\", \"w\") as file:\n",
215
+ " file.write(\" \".join(dataset[\"text\"]))"
216
+ ]
217
+ },
218
+ {
219
+ "cell_type": "code",
220
+ "execution_count": 17,
221
+ "id": "f95596a5",
222
+ "metadata": {},
223
+ "outputs": [],
224
+ "source": [
225
+ "with open(\"5gram.arpa\", \"r\") as read_file, open(\"5gram_correct.arpa\", \"w\") as write_file:\n",
226
+ " has_added_eos = False\n",
227
+ " for line in read_file:\n",
228
+ " if not has_added_eos and \"ngram 1=\" in line:\n",
229
+ " count=line.strip().split(\"=\")[-1]\n",
230
+ " write_file.write(line.replace(f\"{count}\", f\"{int(count)+1}\"))\n",
231
+ " elif not has_added_eos and \"<s>\" in line:\n",
232
+ " write_file.write(line)\n",
233
+ " write_file.write(line.replace(\"<s>\", \"</s>\"))\n",
234
+ " has_added_eos = True\n",
235
+ " else:\n",
236
+ " write_file.write(line)"
237
+ ]
238
+ },
239
+ {
240
+ "cell_type": "code",
241
+ "execution_count": 1,
242
+ "id": "f6489f25",
243
+ "metadata": {},
244
+ "outputs": [
245
+ {
246
+ "name": "stderr",
247
+ "output_type": "stream",
248
+ "text": [
249
+ "file ./config.json not found\n"
250
+ ]
251
+ },
252
+ {
253
+ "ename": "OSError",
254
+ "evalue": "Can't load config for './'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './' is the correct path to a directory containing a config.json file",
255
+ "output_type": "error",
256
+ "traceback": [
257
+ "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
258
+ "\u001b[0;31mOSError\u001b[0m Traceback (most recent call last)",
259
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:585\u001b[0m, in \u001b[0;36mPretrainedConfig._get_config_dict\u001b[0;34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[0m\n\u001b[1;32m 583\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 584\u001b[0m \u001b[38;5;66;03m# Load from URL or cache if already cached\u001b[39;00m\n\u001b[0;32m--> 585\u001b[0m resolved_config_file \u001b[38;5;241m=\u001b[39m \u001b[43mcached_path\u001b[49m\u001b[43m(\u001b[49m\n\u001b[1;32m 586\u001b[0m \u001b[43m \u001b[49m\u001b[43mconfig_file\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 587\u001b[0m \u001b[43m \u001b[49m\u001b[43mcache_dir\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mcache_dir\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 588\u001b[0m \u001b[43m \u001b[49m\u001b[43mforce_download\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mforce_download\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 589\u001b[0m \u001b[43m \u001b[49m\u001b[43mproxies\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mproxies\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 590\u001b[0m \u001b[43m \u001b[49m\u001b[43mresume_download\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mresume_download\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 591\u001b[0m \u001b[43m \u001b[49m\u001b[43mlocal_files_only\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43mlocal_files_only\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 592\u001b[0m \u001b[43m \u001b[49m\u001b[43muse_auth_token\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43muse_auth_token\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 593\u001b[0m \u001b[43m \u001b[49m\u001b[43muser_agent\u001b[49m\u001b[38;5;241;43m=\u001b[39;49m\u001b[43muser_agent\u001b[49m\u001b[43m,\u001b[49m\n\u001b[1;32m 594\u001b[0m \u001b[43m \u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 596\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m RepositoryNotFoundError \u001b[38;5;28;01mas\u001b[39;00m err:\n",
260
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/file_utils.py:1861\u001b[0m, in \u001b[0;36mcached_path\u001b[0;34m(url_or_filename, cache_dir, force_download, proxies, resume_download, user_agent, extract_compressed_file, force_extract, use_auth_token, local_files_only)\u001b[0m\n\u001b[1;32m 1859\u001b[0m \u001b[38;5;28;01melif\u001b[39;00m urlparse(url_or_filename)\u001b[38;5;241m.\u001b[39mscheme \u001b[38;5;241m==\u001b[39m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m\"\u001b[39m:\n\u001b[1;32m 1860\u001b[0m \u001b[38;5;66;03m# File, but it doesn't exist.\u001b[39;00m\n\u001b[0;32m-> 1861\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m(\u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mfile \u001b[39m\u001b[38;5;132;01m{\u001b[39;00murl_or_filename\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m not found\u001b[39m\u001b[38;5;124m\"\u001b[39m)\n\u001b[1;32m 1862\u001b[0m \u001b[38;5;28;01melse\u001b[39;00m:\n\u001b[1;32m 1863\u001b[0m \u001b[38;5;66;03m# Something unknown\u001b[39;00m\n",
261
+ "\u001b[0;31mOSError\u001b[0m: file ./config.json not found",
262
+ "\nDuring handling of the above exception, another exception occurred:\n",
263
+ "\u001b[0;31mOSError\u001b[0m Traceback (most recent call last)",
264
+ "Input \u001b[0;32mIn [1]\u001b[0m, in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m 1\u001b[0m \u001b[38;5;28;01mfrom\u001b[39;00m \u001b[38;5;21;01mtransformers\u001b[39;00m \u001b[38;5;28;01mimport\u001b[39;00m AutoProcessor\n\u001b[0;32m----> 3\u001b[0m processor \u001b[38;5;241m=\u001b[39m \u001b[43mAutoProcessor\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[38;5;124;43m./\u001b[39;49m\u001b[38;5;124;43m\"\u001b[39;49m\u001b[43m)\u001b[49m\n",
265
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/models/auto/processing_auto.py:178\u001b[0m, in \u001b[0;36mAutoProcessor.from_pretrained\u001b[0;34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[0m\n\u001b[1;32m 176\u001b[0m \u001b[38;5;66;03m# Otherwise, load config, if it can be loaded.\u001b[39;00m\n\u001b[1;32m 177\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28misinstance\u001b[39m(config, PretrainedConfig):\n\u001b[0;32m--> 178\u001b[0m config \u001b[38;5;241m=\u001b[39m \u001b[43mAutoConfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mfrom_pretrained\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpretrained_model_name_or_path\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 180\u001b[0m model_type \u001b[38;5;241m=\u001b[39m config_class_to_model_type(\u001b[38;5;28mtype\u001b[39m(config)\u001b[38;5;241m.\u001b[39m\u001b[38;5;18m__name__\u001b[39m)\n\u001b[1;32m 182\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;28mgetattr\u001b[39m(config, \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mprocessor_class\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mNone\u001b[39;00m) \u001b[38;5;129;01mis\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m \u001b[38;5;28;01mNone\u001b[39;00m:\n",
266
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/models/auto/configuration_auto.py:617\u001b[0m, in \u001b[0;36mAutoConfig.from_pretrained\u001b[0;34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[0m\n\u001b[1;32m 615\u001b[0m kwargs[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mname_or_path\u001b[39m\u001b[38;5;124m\"\u001b[39m] \u001b[38;5;241m=\u001b[39m pretrained_model_name_or_path\n\u001b[1;32m 616\u001b[0m trust_remote_code \u001b[38;5;241m=\u001b[39m kwargs\u001b[38;5;241m.\u001b[39mpop(\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mtrust_remote_code\u001b[39m\u001b[38;5;124m\"\u001b[39m, \u001b[38;5;28;01mFalse\u001b[39;00m)\n\u001b[0;32m--> 617\u001b[0m config_dict, _ \u001b[38;5;241m=\u001b[39m \u001b[43mPretrainedConfig\u001b[49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43mget_config_dict\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpretrained_model_name_or_path\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 618\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mauto_map\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01min\u001b[39;00m config_dict \u001b[38;5;129;01mand\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mAutoConfig\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01min\u001b[39;00m config_dict[\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mauto_map\u001b[39m\u001b[38;5;124m\"\u001b[39m]:\n\u001b[1;32m 619\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;129;01mnot\u001b[39;00m trust_remote_code:\n",
267
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:537\u001b[0m, in \u001b[0;36mPretrainedConfig.get_config_dict\u001b[0;34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[0m\n\u001b[1;32m 535\u001b[0m original_kwargs \u001b[38;5;241m=\u001b[39m copy\u001b[38;5;241m.\u001b[39mdeepcopy(kwargs)\n\u001b[1;32m 536\u001b[0m \u001b[38;5;66;03m# Get config dict associated with the base config file\u001b[39;00m\n\u001b[0;32m--> 537\u001b[0m config_dict, kwargs \u001b[38;5;241m=\u001b[39m \u001b[38;5;28;43mcls\u001b[39;49m\u001b[38;5;241;43m.\u001b[39;49m\u001b[43m_get_config_dict\u001b[49m\u001b[43m(\u001b[49m\u001b[43mpretrained_model_name_or_path\u001b[49m\u001b[43m,\u001b[49m\u001b[43m \u001b[49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[38;5;241;43m*\u001b[39;49m\u001b[43mkwargs\u001b[49m\u001b[43m)\u001b[49m\n\u001b[1;32m 539\u001b[0m \u001b[38;5;66;03m# That config file may point us toward another config file to use.\u001b[39;00m\n\u001b[1;32m 540\u001b[0m \u001b[38;5;28;01mif\u001b[39;00m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mconfiguration_files\u001b[39m\u001b[38;5;124m\"\u001b[39m \u001b[38;5;129;01min\u001b[39;00m config_dict:\n",
268
+ "File \u001b[0;32m/opt/conda/lib/python3.8/site-packages/transformers/configuration_utils.py:626\u001b[0m, in \u001b[0;36mPretrainedConfig._get_config_dict\u001b[0;34m(cls, pretrained_model_name_or_path, **kwargs)\u001b[0m\n\u001b[1;32m 624\u001b[0m \u001b[38;5;28;01mexcept\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m \u001b[38;5;28;01mas\u001b[39;00m err:\n\u001b[1;32m 625\u001b[0m logger\u001b[38;5;241m.\u001b[39merror(err)\n\u001b[0;32m--> 626\u001b[0m \u001b[38;5;28;01mraise\u001b[39;00m \u001b[38;5;167;01mEnvironmentError\u001b[39;00m(\n\u001b[1;32m 627\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mCan\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt load config for \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m. If you were trying to load it from \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 628\u001b[0m \u001b[38;5;124m\"\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mhttps://huggingface.co/models\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m, make sure you don\u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124mt have a local directory with the same name. \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 629\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mOtherwise, make sure \u001b[39m\u001b[38;5;124m'\u001b[39m\u001b[38;5;132;01m{\u001b[39;00mpretrained_model_name_or_path\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m'\u001b[39m\u001b[38;5;124m is the correct path to a directory \u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 630\u001b[0m \u001b[38;5;124mf\u001b[39m\u001b[38;5;124m\"\u001b[39m\u001b[38;5;124mcontaining a \u001b[39m\u001b[38;5;132;01m{\u001b[39;00mconfiguration_file\u001b[38;5;132;01m}\u001b[39;00m\u001b[38;5;124m file\u001b[39m\u001b[38;5;124m\"\u001b[39m\n\u001b[1;32m 631\u001b[0m )\n\u001b[1;32m 633\u001b[0m \u001b[38;5;28;01mtry\u001b[39;00m:\n\u001b[1;32m 634\u001b[0m \u001b[38;5;66;03m# Load config dict\u001b[39;00m\n\u001b[1;32m 635\u001b[0m config_dict \u001b[38;5;241m=\u001b[39m \u001b[38;5;28mcls\u001b[39m\u001b[38;5;241m.\u001b[39m_dict_from_json_file(resolved_config_file)\n",
269
+ "\u001b[0;31mOSError\u001b[0m: Can't load config for './'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure './' is the correct path to a directory containing a config.json file"
270
+ ]
271
+ }
272
+ ],
273
+ "source": [
274
+ "from transformers import AutoProcessor\n",
275
+ "\n",
276
+ "processor = AutoProcessor.from_pretrained(\"./\")"
277
+ ]
278
+ },
279
+ {
280
+ "cell_type": "code",
281
+ "execution_count": null,
282
+ "id": "ab24f645",
283
+ "metadata": {},
284
+ "outputs": [],
285
+ "source": []
286
+ }
287
+ ],
288
+ "metadata": {
289
+ "kernelspec": {
290
+ "display_name": "Python 3 (ipykernel)",
291
+ "language": "python",
292
+ "name": "python3"
293
+ },
294
+ "language_info": {
295
+ "codemirror_mode": {
296
+ "name": "ipython",
297
+ "version": 3
298
+ },
299
+ "file_extension": ".py",
300
+ "mimetype": "text/x-python",
301
+ "name": "python",
302
+ "nbconvert_exporter": "python",
303
+ "pygments_lexer": "ipython3",
304
+ "version": "3.8.8"
305
+ }
306
+ },
307
+ "nbformat": 4,
308
+ "nbformat_minor": 5
309
+ }
.ipynb_checkpoints/log_mozilla-foundation_common_voice_8_0_fr_test_predictions-checkpoint.txt ADDED
The diff for this file is too large to render. See raw diff
.ipynb_checkpoints/log_mozilla-foundation_common_voice_8_0_fr_test_targets-checkpoint.txt ADDED
The diff for this file is too large to render. See raw diff
.ipynb_checkpoints/log_speech-recognition-community-v2_dev_data_fr_validation_predictions-checkpoint.txt ADDED
The diff for this file is too large to render. See raw diff
.ipynb_checkpoints/log_speech-recognition-community-v2_dev_data_fr_validation_targets-checkpoint.txt ADDED
The diff for this file is too large to render. See raw diff
.ipynb_checkpoints/mozilla-foundation_common_voice_8_0_fr_test_eval_results-checkpoint.txt ADDED
@@ -0,0 +1,2 @@
1
+ WER: 0.18333515105245937
2
+ CER: 0.05606368028384753
.ipynb_checkpoints/preprocessor_config-checkpoint.json ADDED
@@ -0,0 +1,10 @@
1
+ {
2
+ "do_normalize": true,
3
+ "feature_extractor_type": "Wav2Vec2FeatureExtractor",
4
+ "feature_size": 1,
5
+ "padding_side": "right",
6
+ "padding_value": 0,
7
+ "processor_class": "Wav2Vec2ProcessorWithLM",
8
+ "return_attention_mask": true,
9
+ "sampling_rate": 16000
10
+ }
.ipynb_checkpoints/run-checkpoint.sh CHANGED
@@ -20,8 +20,8 @@ python run_speech_recognition_ctc.py \
20
  --mask_feature_prob="0.25" \
21
  --mask_time_length="10" \
22
  --mask_time_prob="0.75" \
23
- --model_name_or_path="facebook/wav2vec2-xls-r-1b" \
+ --model_name_or_path="./checkpoint-13000" \
24
- --num_train_epochs="4.0" \
+ --num_train_epochs="6.0" \
25
  --output_dir="./" \
26
  --overwrite_output_dir \
27
  --per_device_train_batch_size="16" \
alphabet.json ADDED
@@ -0,0 +1 @@
1
+ {"labels": [" ", "'", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l", "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z", "\u00e0", "\u00e2", "\u00e4", "\u00e7", "\u00e8", "\u00e9", "\u00ea", "\u00eb", "\u00ee", "\u00ef", "\u00f4", "\u00f6", "\u00f9", "\u00fb", "\u00fc", "\u00ff", "\u2047", ""], "is_bpe": false}
config.json CHANGED
@@ -1,5 +1,5 @@
1
  {
2
- "_name_or_path": "facebook/wav2vec2-xls-r-1b",
+ "_name_or_path": "./checkpoint-13000",
3
  "activation_dropout": 0.1,
4
  "adapter_kernel_size": 3,
5
  "adapter_stride": 2,
create_lm.ipynb ADDED
@@ -0,0 +1,344 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "code",
5
+ "execution_count": 6,
6
+ "id": "d354f2ac",
7
+ "metadata": {},
8
+ "outputs": [],
9
+ "source": [
10
+ "import transformers\n",
11
+ "from datasets import load_dataset\n",
12
+ "import re"
13
+ ]
14
+ },
15
+ {
16
+ "cell_type": "code",
17
+ "execution_count": 11,
18
+ "id": "fe33d468",
19
+ "metadata": {},
20
+ "outputs": [],
21
+ "source": [
22
+ "username = \"Plim\" # change to your username\n",
23
+ "target_lang = \"fr\""
24
+ ]
25
+ },
26
+ {
27
+ "cell_type": "code",
28
+ "execution_count": 4,
29
+ "id": "f84ba325",
30
+ "metadata": {},
31
+ "outputs": [
32
+ {
33
+ "data": {
34
+ "application/vnd.jupyter.widget-view+json": {
35
+ "model_id": "f230feb459c441a9a11e53b867e8914a",
36
+ "version_major": 2,
37
+ "version_minor": 0
38
+ },
39
+ "text/plain": [
40
+ "Downloading: 0%| | 0.00/2.60k [00:00<?, ?B/s]"
41
+ ]
42
+ },
43
+ "metadata": {},
44
+ "output_type": "display_data"
45
+ },
46
+ {
47
+ "data": {
48
+ "application/vnd.jupyter.widget-view+json": {
49
+ "model_id": "a4a8fa35d48f4a6db8072baed6b2389b",
50
+ "version_major": 2,
51
+ "version_minor": 0
52
+ },
53
+ "text/plain": [
54
+ "Downloading: 0%| | 0.00/29.6k [00:00<?, ?B/s]"
55
+ ]
56
+ },
57
+ "metadata": {},
58
+ "output_type": "display_data"
59
+ },
60
+ {
61
+ "name": "stderr",
62
+ "output_type": "stream",
63
+ "text": [
64
+ "Using custom data configuration en-fr-lang1=en,lang2=fr\n"
65
+ ]
66
+ },
67
+ {
68
+ "name": "stdout",
69
+ "output_type": "stream",
70
+ "text": [
71
+ "Downloading and preparing dataset europarl_bilingual/en-fr (download: 278.07 MiB, generated: 643.66 MiB, post-processed: Unknown size, total: 921.72 MiB) to /workspace/.cache/huggingface/datasets/europarl_bilingual/en-fr-lang1=en,lang2=fr/8.0.0/2ab0200e7729616bfd4a4df6bfb29b31746ceb5a59f8c75c02ca35e1ebead950...\n"
72
+ ]
73
+ },
74
+ {
75
+ "data": {
76
+ "application/vnd.jupyter.widget-view+json": {
77
+ "model_id": "aa00b5d6dc154449861dddcf9f0d2fc8",
78
+ "version_major": 2,
79
+ "version_minor": 0
80
+ },
81
+ "text/plain": [
82
+ "Downloading: 0%| | 0.00/142M [00:00<?, ?B/s]"
83
+ ]
84
+ },
85
+ "metadata": {},
86
+ "output_type": "display_data"
87
+ },
88
+ {
89
+ "data": {
90
+ "application/vnd.jupyter.widget-view+json": {
91
+ "model_id": "563096fc78454333b5ae23e87a7e3469",
92
+ "version_major": 2,
93
+ "version_minor": 0
94
+ },
95
+ "text/plain": [
96
+ "Downloading: 0%| | 0.00/140M [00:00<?, ?B/s]"
97
+ ]
98
+ },
99
+ "metadata": {},
100
+ "output_type": "display_data"
101
+ },
102
+ {
103
+ "data": {
104
+ "application/vnd.jupyter.widget-view+json": {
105
+ "model_id": "eba05e5151b34505b9a43e383cb6cfe0",
106
+ "version_major": 2,
107
+ "version_minor": 0
108
+ },
109
+ "text/plain": [
110
+ "Downloading: 0%| | 0.00/9.30M [00:00<?, ?B/s]"
111
+ ]
112
+ },
113
+ "metadata": {},
114
+ "output_type": "display_data"
115
+ },
116
+ {
117
+ "data": {
118
+ "application/vnd.jupyter.widget-view+json": {
119
+ "model_id": "",
120
+ "version_major": 2,
121
+ "version_minor": 0
122
+ },
123
+ "text/plain": [
124
+ "0 examples [00:00, ? examples/s]"
125
+ ]
126
+ },
127
+ "metadata": {},
128
+ "output_type": "display_data"
129
+ },
130
+ {
131
+ "name": "stdout",
132
+ "output_type": "stream",
133
+ "text": [
134
+ "Dataset europarl_bilingual downloaded and prepared to /workspace/.cache/huggingface/datasets/europarl_bilingual/en-fr-lang1=en,lang2=fr/8.0.0/2ab0200e7729616bfd4a4df6bfb29b31746ceb5a59f8c75c02ca35e1ebead950. Subsequent calls will reuse this data.\n"
135
+ ]
136
+ }
137
+ ],
138
+ "source": [
139
+ "dataset = load_dataset(\"europarl_bilingual\", lang1=\"en\", lang2=target_lang, split=\"train\")"
140
+ ]
141
+ },
142
+ {
143
+ "cell_type": "code",
144
+ "execution_count": 12,
145
+ "id": "c26261e9",
146
+ "metadata": {},
147
+ "outputs": [],
148
+ "source": [
149
+ "def extract_text(batch):\n",
150
+ " target_lang = \"fr\"\n",
151
+ " chars_to_ignore_regex = '[^a-zàâäçéèêëîïôöùûüÿ\\'’ ]'\n",
152
+ " text = batch[\"translation\"][target_lang]\n",
153
+ " batch[\"text\"] = re.sub(chars_to_ignore_regex, \"\", text.lower()).replace('’', \"'\")\n",
154
+ " return batch"
155
+ ]
156
+ },
157
+ {
158
+ "cell_type": "code",
159
+ "execution_count": 13,
160
+ "id": "5434c0b7",
161
+ "metadata": {},
162
+ "outputs": [
163
+ {
164
+ "data": {
165
+ "application/vnd.jupyter.widget-view+json": {
166
+ "model_id": "00d998de52544f6c8750c53bc0c85d66",
167
+ "version_major": 2,
168
+ "version_minor": 0
169
+ },
170
+ "text/plain": [
171
+ "0ex [00:00, ?ex/s]"
172
+ ]
173
+ },
174
+ "metadata": {},
175
+ "output_type": "display_data"
176
+ }
177
+ ],
178
+ "source": [
179
+ "dataset = dataset.map(extract_text, remove_columns=dataset.column_names)"
180
+ ]
181
+ },
182
+ {
183
+ "cell_type": "code",
184
+ "execution_count": 14,
185
+ "id": "e1c780b8",
186
+ "metadata": {},
187
+ "outputs": [
188
+ {
189
+ "data": {
190
+ "application/vnd.jupyter.widget-view+json": {
191
+ "model_id": "461a219cdb6d42b2b890ec028c336e7f",
192
+ "version_major": 2,
193
+ "version_minor": 0
194
+ },
195
+ "text/plain": [
196
+ "Pushing dataset shards to the dataset hub: 0%| | 0/1 [00:00<?, ?it/s]"
197
+ ]
198
+ },
199
+ "metadata": {},
200
+ "output_type": "display_data"
201
+ }
202
+ ],
203
+ "source": [
204
+ "dataset.push_to_hub(f\"{target_lang}_corpora_parliament_processed\", split=\"train\")"
205
+ ]
206
+ },
207
+ {
208
+ "cell_type": "code",
209
+ "execution_count": 15,
210
+ "id": "41c0ab30",
211
+ "metadata": {},
212
+ "outputs": [],
213
+ "source": [
214
+ "with open(\"text.txt\", \"w\") as file:\n",
215
+ " file.write(\" \".join(dataset[\"text\"]))"
216
+ ]
217
+ },
218
+ {
219
+ "cell_type": "code",
220
+ "execution_count": 7,
221
+ "id": "4d6bfb67",
222
+ "metadata": {},
223
+ "outputs": [],
224
+ "source": [
225
+ "with open(\"language_model/5gram.arpa\", \"r\") as read_file, open(\"language_model/5gram_correct.arpa\", \"w\") as write_file:\n",
226
+ " has_added_eos = False\n",
227
+ " for line in read_file:\n",
228
+ " if not has_added_eos and \"ngram 1=\" in line:\n",
229
+ " count=line.strip().split(\"=\")[-1]\n",
230
+ " write_file.write(line.replace(f\"{count}\", f\"{int(count)+1}\"))\n",
231
+ " elif not has_added_eos and \"<s>\" in line:\n",
232
+ " write_file.write(line)\n",
233
+ " write_file.write(line.replace(\"<s>\", \"</s>\"))\n",
234
+ " has_added_eos = True\n",
235
+ " else:\n",
236
+ " write_file.write(line)"
237
+ ]
238
+ },
239
+ {
240
+ "cell_type": "code",
241
+ "execution_count": 8,
242
+ "id": "3407085c",
243
+ "metadata": {},
244
+ "outputs": [],
245
+ "source": [
246
+ "from transformers import AutoProcessor\n",
247
+ "\n",
248
+ "processor = AutoProcessor.from_pretrained(\"./\")"
249
+ ]
250
+ },
251
+ {
252
+ "cell_type": "code",
253
+ "execution_count": 9,
254
+ "id": "5a60df92",
255
+ "metadata": {},
256
+ "outputs": [],
257
+ "source": [
258
+ "vocab_dict = processor.tokenizer.get_vocab()\n",
259
+ "sorted_vocab_dict = {k.lower(): v for k, v in sorted(vocab_dict.items(), key=lambda item: item[1])}"
260
+ ]
261
+ },
262
+ {
263
+ "cell_type": "code",
264
+ "execution_count": 10,
265
+ "id": "cd1a94ea",
266
+ "metadata": {},
267
+ "outputs": [
268
+ {
269
+ "name": "stderr",
270
+ "output_type": "stream",
271
+ "text": [
272
+ "Loading the LM will be faster if you build a binary file.\n",
273
+ "Reading /workspace/xls-r-1b-cv_8-fr/language_model/5gram_correct.arpa\n",
274
+ "----5---10---15---20---25---30---35---40---45---50---55---60---65---70---75---80---85---90---95--100\n",
275
+ "****************************************************************************************************\n"
276
+ ]
277
+ }
278
+ ],
279
+ "source": [
280
+ "from pyctcdecode import build_ctcdecoder\n",
281
+ "\n",
282
+ "decoder = build_ctcdecoder(\n",
283
+ " labels=list(sorted_vocab_dict.keys()),\n",
284
+ " kenlm_model_path=\"language_model/5gram_correct.arpa\",\n",
285
+ ")"
286
+ ]
287
+ },
288
+ {
289
+ "cell_type": "code",
290
+ "execution_count": 11,
291
+ "id": "e627079a",
292
+ "metadata": {},
293
+ "outputs": [],
294
+ "source": [
295
+ "from transformers import Wav2Vec2ProcessorWithLM\n",
296
+ "\n",
297
+ "processor_with_lm = Wav2Vec2ProcessorWithLM(\n",
298
+ " feature_extractor=processor.feature_extractor,\n",
299
+ " tokenizer=processor.tokenizer,\n",
300
+ " decoder=decoder\n",
301
+ ")"
302
+ ]
303
+ },
304
+ {
305
+ "cell_type": "code",
306
+ "execution_count": 18,
307
+ "id": "bc665f62",
308
+ "metadata": {},
309
+ "outputs": [],
310
+ "source": [
311
+ "processor_with_lm.save_pretrained(\"Plim/xls-r-1b-cv_8-fr\")"
312
+ ]
313
+ },
314
+ {
315
+ "cell_type": "code",
316
+ "execution_count": null,
317
+ "id": "7bcbb30b",
318
+ "metadata": {},
319
+ "outputs": [],
320
+ "source": []
321
+ }
322
+ ],
323
+ "metadata": {
324
+ "kernelspec": {
325
+ "display_name": "Python 3 (ipykernel)",
326
+ "language": "python",
327
+ "name": "python3"
328
+ },
329
+ "language_info": {
330
+ "codemirror_mode": {
331
+ "name": "ipython",
332
+ "version": 3
333
+ },
334
+ "file_extension": ".py",
335
+ "mimetype": "text/x-python",
336
+ "name": "python",
337
+ "nbconvert_exporter": "python",
338
+ "pygments_lexer": "ipython3",
339
+ "version": "3.8.8"
340
+ }
341
+ },
342
+ "nbformat": 4,
343
+ "nbformat_minor": 5
344
+ }
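The notebook writes the Europarl corpus to `text.txt` and later reads `language_model/5gram.arpa`, but the KenLM training step itself happens outside the notebook. A sketch of the assumed intermediate step is below, written as a Python wrapper around KenLM's `lmplz`; the `kenlm/build/bin` path is an assumption about where KenLM was compiled.

```python
import subprocess

# Assumed intermediate step (not shown in the notebook): train a 5-gram
# language model on the corpus written to text.txt using KenLM's lmplz.
# The binary path assumes KenLM was built locally under kenlm/build/bin.
with open("text.txt", "rb") as corpus, open("language_model/5gram.arpa", "wb") as arpa:
    subprocess.run(
        ["kenlm/build/bin/lmplz", "-o", "5"],
        stdin=corpus,
        stdout=arpa,
        check=True,
    )
```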
keep_model/pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4a7ac9a4075231a9b1f2ef054fe1161fdf7235b6c7bd018f7505d44da3332960
3
+ size 3850548401
langague_model/5gram.bin ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:726c0eaeadf24aa621faaddc6640ddc431a65f45e1b16ff0e6a9af565facd09f
3
+ size 2075344331
langague_model/attrs.json ADDED
@@ -0,0 +1 @@
1
+ {"alpha": 0.5, "beta": 1.5, "unk_score_offset": -10.0, "score_boundary": true}
langague_model/unigrams.txt ADDED
The diff for this file is too large to render. See raw diff
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:4a7ac9a4075231a9b1f2ef054fe1161fdf7235b6c7bd018f7505d44da3332960
+ oid sha256:581f4c8322c68fc308f22b68839669bf48755ac5706016bfe60c0234ba26e947
3
  size 3850548401
run.sh CHANGED
@@ -20,8 +20,8 @@ python run_speech_recognition_ctc.py \
20
  --mask_feature_prob="0.25" \
21
  --mask_time_length="10" \
22
  --mask_time_prob="0.75" \
23
- --model_name_or_path="facebook/wav2vec2-xls-r-1b" \
+ --model_name_or_path="./checkpoint-13000" \
24
- --num_train_epochs="4.0" \
+ --num_train_epochs="6.0" \
25
  --output_dir="./" \
26
  --overwrite_output_dir \
27
  --per_device_train_batch_size="16" \
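run.sh now resumes from `./checkpoint-13000` and extends training to 6 epochs. For reference, a rough sketch of how the hyperparameters listed in the README checkpoint might be expressed directly as `transformers.TrainingArguments` is shown below; the actual run uses `run_speech_recognition_ctc.py` with the flags in run.sh, so this mapping is illustrative only.

```python
from transformers import TrainingArguments

# Illustrative mapping of the hyperparameters from the README checkpoint;
# the real configuration lives in run.sh / run_speech_recognition_ctc.py.
training_args = TrainingArguments(
    output_dir="./",
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,   # 16 x 8 = 128 total train batch size
    learning_rate=7.5e-5,
    warmup_steps=2000,
    lr_scheduler_type="linear",
    num_train_epochs=6.0,            # raised from 4.0 when resuming from checkpoint-13000
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```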
training_args.bin CHANGED
@@ -1,3 +1,3 @@
1
  version https://git-lfs.github.com/spec/v1
2
- oid sha256:af168053e52049b75214ffe031549b7a5ed7c0e774b3f727dc7f6c7d61dd0f9c
+ oid sha256:e48a427c7cb614f5fbcb1a4088c5c4abce496c5c8b09e2e526d99a2586ab2194
3
  size 2991
wandb/debug-internal.log CHANGED
@@ -1 +1 @@
1
- run-20220203_170643-2fkfdtzb/logs/debug-internal.log
+ run-20220206_201634-uhiy9e2t/logs/debug-internal.log
wandb/debug.log CHANGED
@@ -1 +1 @@
1
- run-20220203_170643-2fkfdtzb/logs/debug.log
+ run-20220206_201634-uhiy9e2t/logs/debug.log
wandb/latest-run CHANGED
@@ -1 +1 @@
1
- run-20220203_170643-2fkfdtzb
+ run-20220206_201634-uhiy9e2t
wandb/run-20220206_201634-uhiy9e2t/files/conda-environment.yaml ADDED
File without changes
wandb/run-20220206_201634-uhiy9e2t/files/config.yaml ADDED
The diff for this file is too large to render. See raw diff
wandb/run-20220206_201634-uhiy9e2t/files/output.log ADDED
@@ -0,0 +1,1491 @@
+ 0%| | 0/20928 [00:00<?, ?it/s]
+ 63%| 13100/20928 [2:55:49<21:56:54, 10.09s/it]
+ 63%| 13199/20928 [3:15:41<21:46:16, 10.14s/it]
+ 64%| 13300/20928 [3:36:04<21:40:19, 10.23s/it]
+ 64%| 13399/20928 [3:55:59<21:13:16, 10.15s/it]
+ 65%| 13499/20928 [4:16:06<20:57:54, 10.16s/it]
+ 65%| 13600/20928 [4:36:22<20:53:20, 10.26s/it]
+ 65%| 13700/20928 [4:56:31<20:22:11, 10.15s/it]
+ 66%| 13800/20928 [5:16:40<20:11:03, 10.19s/it]
+ 66%| 13900/20928 [5:36:46<19:42:38, 10.10s/it]
+ ***** Running Evaluation ***** | 14000/20928 [5:56:12<14:37:51, 7.60s/it]
+ Num examples = 16021
+ Batch size = 16
+ {'loss': 0.8372, 'learning_rate': 2.748705621301775e-05, 'epoch': 4.01}
+ 100%| 1002/1002 [15:59<00:00, 1.76it/s]
+ Saving model checkpoint to ./checkpoint-14000
+ Configuration saved in ./checkpoint-14000/config.json | 14000/20928 [6:12:25<14:37:51, 7.60s/it]
+ Model weights saved in ./checkpoint-14000/pytorch_model.bin
+ Configuration saved in ./checkpoint-14000/preprocessor_config.json
+ Configuration saved in ./preprocessor_config.json
+ 02/07/2022 02:32:50 - WARNING - huggingface_hub.repository - Adding files tracked by Git LFS: ['wandb/run-20220206_201634-uhiy9e2t/run-uhiy9e2t.wandb']. This may take a bit of time if the files are large.
wandb/run-20220206_201634-uhiy9e2t/files/requirements.txt ADDED
@@ -0,0 +1,183 @@
1
+ aiohttp==3.8.1
2
+ aiosignal==1.2.0
3
+ analytics-python==1.4.0
4
+ anyio==3.5.0
5
+ appdirs==1.4.4
6
+ argon2-cffi-bindings==21.2.0
7
+ argon2-cffi==21.3.0
8
+ asgiref==3.5.0
9
+ asttokens==2.0.5
10
+ async-timeout==4.0.2
11
+ attrs==21.4.0
12
+ audioread==2.1.9
13
+ backcall==0.2.0
14
+ backoff==1.10.0
15
+ bcrypt==3.2.0
16
+ beautifulsoup4==4.9.3
17
+ black==22.1.0
18
+ bleach==4.1.0
19
+ brotlipy==0.7.0
20
+ certifi==2020.12.5
21
+ cffi==1.14.3
22
+ chardet==3.0.4
23
+ charset-normalizer==2.0.11
24
+ click==8.0.3
25
+ conda-build==3.21.4
26
+ conda-package-handling==1.7.2
27
+ conda==4.9.2
28
+ cryptography==3.2.1
29
+ cycler==0.11.0
30
+ datasets==1.18.3.dev0
31
+ debugpy==1.5.1
32
+ decorator==4.4.2
33
+ defusedxml==0.7.1
34
+ dill==0.3.4
35
+ dnspython==2.1.0
36
+ docker-pycreds==0.4.0
37
+ entrypoints==0.3
38
+ executing==0.8.2
39
+ fastapi==0.73.0
40
+ ffmpy==0.3.0
41
+ filelock==3.0.12
42
+ fonttools==4.29.1
43
+ frozenlist==1.3.0
44
+ fsspec==2022.1.0
45
+ gitdb==4.0.9
46
+ gitpython==3.1.26
47
+ glob2==0.7
48
+ gradio==2.7.5.2
49
+ h11==0.13.0
50
+ huggingface-hub==0.4.0
51
+ hypothesis==6.36.1
52
+ idna==2.10
53
+ importlib-resources==5.4.0
54
+ ipykernel==6.7.0
55
+ ipython-genutils==0.2.0
56
+ ipython==8.0.1
57
+ ipywidgets==7.6.3
58
+ jedi==0.17.0
59
+ jinja2==2.11.3
60
+ jiwer==2.3.0
61
+ joblib==1.1.0
62
+ json5==0.9.6
63
+ jsonschema==4.4.0
64
+ jupyter-client==7.1.2
65
+ jupyter-core==4.9.1
66
+ jupyterlab-pygments==0.1.2
67
+ jupyterlab-server==1.2.0
68
+ jupyterlab-widgets==1.0.2
69
+ jupyterlab==2.2.9
70
+ kiwisolver==1.3.2
71
+ libarchive-c==2.9
72
+ librosa==0.8.1
73
+ llvmlite==0.38.0
74
+ markdown2==2.4.2
75
+ markupsafe==1.1.1
76
+ matplotlib-inline==0.1.3
77
+ matplotlib==3.5.1
78
+ mistune==0.8.4
79
+ mkl-fft==1.3.0
80
+ mkl-random==1.1.1
81
+ mkl-service==2.3.0
82
+ monotonic==1.6
83
+ multidict==6.0.2
84
+ multiprocess==0.70.12.2
85
+ mypy-extensions==0.4.3
86
+ nano==0.10.0
87
+ nbclient==0.5.10
88
+ nbconvert==6.4.1
89
+ nbformat==5.1.3
90
+ nest-asyncio==1.5.4
91
+ notebook==6.4.8
92
+ numba==0.55.1
93
+ numpy==1.19.2
94
+ olefile==0.46
95
+ packaging==21.3
96
+ pandas==1.4.0
97
+ pandocfilters==1.5.0
98
+ paramiko==2.9.2
99
+ parso==0.8.1
100
+ pathspec==0.9.0
101
+ pathtools==0.1.2
102
+ pexpect==4.8.0
103
+ pickleshare==0.7.5
104
+ pillow==8.1.2
105
+ pip==22.0.2
106
+ pkginfo==1.7.0
107
+ platformdirs==2.4.1
108
+ pooch==1.6.0
109
+ prometheus-client==0.13.1
110
+ promise==2.3
111
+ prompt-toolkit==3.0.8
112
+ protobuf==3.19.4
113
+ psutil==5.8.0
114
+ ptyprocess==0.7.0
115
+ pure-eval==0.2.2
116
+ pyarrow==6.0.1
117
+ pycosat==0.6.3
118
+ pycparser==2.20
119
+ pycryptodome==3.14.0
120
+ pyctcdecode==0.3.0
121
+ pydantic==1.9.0
122
+ pydub==0.25.1
123
+ pygments==2.8.0
124
+ pygtrie==2.4.2
125
+ pynacl==1.5.0
126
+ pyopenssl==19.1.0
127
+ pyparsing==3.0.7
128
+ pypi-kenlm==0.1.20210121
129
+ pyrsistent==0.18.1
130
+ pysocks==1.7.1
131
+ python-dateutil==2.8.2
132
+ python-etcd==0.4.5
133
+ python-levenshtein==0.12.2
134
+ python-multipart==0.0.5
135
+ pytz==2021.1
136
+ pyyaml==5.4.1
137
+ pyzmq==22.3.0
138
+ regex==2022.1.18
139
+ requests==2.24.0
140
+ resampy==0.2.2
141
+ ruamel-yaml==0.15.87
142
+ sacremoses==0.0.47
143
+ scikit-learn==1.0.2
144
+ scipy==1.7.3
145
+ send2trash==1.8.0
146
+ sentry-sdk==1.5.4
147
+ setuptools==50.3.1.post20201107
148
+ shortuuid==1.0.8
149
+ six==1.15.0
150
+ smmap==5.0.0
151
+ sniffio==1.2.0
152
+ sortedcontainers==2.4.0
153
+ soundfile==0.10.3.post1
154
+ soupsieve==2.2
155
+ stack-data==0.1.4
156
+ starlette==0.17.1
157
+ termcolor==1.1.0
158
+ terminado==0.13.1
159
+ testpath==0.5.0
160
+ threadpoolctl==3.1.0
161
+ tokenizers==0.11.4
162
+ tomli==2.0.0
163
+ torch==1.10.2
164
+ torchaudio==0.10.2
165
+ torchelastic==0.2.2
166
+ torchtext==0.9.1
167
+ torchvision==0.9.1
168
+ tornado==6.1
169
+ tqdm==4.62.3
170
+ traitlets==5.1.1
171
+ transformers==4.17.0.dev0
172
+ typing-extensions==4.0.1
173
+ urllib3==1.25.11
174
+ uvicorn==0.17.1
175
+ wandb==0.12.10
176
+ wcwidth==0.2.5
177
+ webencodings==0.5.1
178
+ wheel==0.35.1
179
+ widgetsnbextension==3.5.2
180
+ xxhash==2.0.2
181
+ yarl==1.7.2
182
+ yaspin==2.1.0
183
+ zipp==3.7.0
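The requirements captured above include pyctcdecode==0.3.0 and pypi-kenlm, the pieces used for n-gram language-model decoding of CTC output. The following is an illustrative sketch of how such a decoder is usually built with pyctcdecode; the KenLM file path is an assumption for illustration and may not match this repository's actual layout.

```python
# Illustrative sketch: combine the CTC vocabulary with a KenLM n-gram model
# using pyctcdecode (both listed in the requirements above).
import numpy as np
from pyctcdecode import build_ctcdecoder
from transformers import Wav2Vec2Processor

processor = Wav2Vec2Processor.from_pretrained(".")
vocab = processor.tokenizer.get_vocab()
# Order the labels by token id so they line up with the acoustic model's logits.
labels = [tok for tok, _ in sorted(vocab.items(), key=lambda kv: kv[1])]

decoder = build_ctcdecoder(
    labels,
    kenlm_model_path="5gram.bin",   # assumed path to a KenLM binary model
)

# `logits` would normally be the (time, vocab) matrix from the acoustic model;
# a zero matrix stands in here purely as a placeholder.
logits = np.zeros((100, len(labels)), dtype=np.float32)
print(decoder.decode(logits))
```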
wandb/run-20220206_201634-uhiy9e2t/files/wandb-metadata.json ADDED
@@ -0,0 +1,61 @@
1
+ {
2
+ "os": "Linux-4.15.0-151-generic-x86_64-with-glibc2.10",
3
+ "python": "3.8.8",
4
+ "heartbeatAt": "2022-02-06T20:16:36.272327",
5
+ "startedAt": "2022-02-06T20:16:34.257348",
6
+ "docker": null,
7
+ "gpu": "Tesla V100S-PCIE-32GB",
8
+ "gpu_count": 1,
9
+ "cpu_count": 60,
10
+ "cuda": null,
11
+ "args": [
12
+ "--activation_dropout=0.1",
13
+ "--dataset_name=mozilla-foundation/common_voice_8_0",
14
+ "--dataset_config_name=fr",
15
+ "--eval_steps=1000",
16
+ "--evaluation_strategy=steps",
17
+ "--feat_proj_dropout=0.0",
18
+ "--freeze_feature_encoder",
19
+ "--fp16",
20
+ "--gradient_accumulation_steps=8",
21
+ "--gradient_checkpointing",
22
+ "--group_by_length",
23
+ "--layerdrop=0.0",
24
+ "--learning_rate=7.5e-5",
25
+ "--length_column_name=input_length",
26
+ "--load_best_model_at_end",
27
+ "--logging_steps=100",
28
+ "--mask_feature_length=64",
29
+ "--mask_feature_prob=0.25",
30
+ "--mask_time_length=10",
31
+ "--mask_time_prob=0.75",
32
+ "--model_name_or_path=./checkpoint-13000",
33
+ "--num_train_epochs=6.0",
34
+ "--output_dir=./",
35
+ "--overwrite_output_dir",
36
+ "--per_device_train_batch_size=16",
37
+ "--per_device_eval_batch_size=16",
38
+ "--preprocessing_num_workers=4",
39
+ "--push_to_hub",
40
+ "--report_to=wandb",
41
+ "--save_steps=1000",
42
+ "--save_total_limit=3",
43
+ "--text_column_name=sentence",
44
+ "--use_auth_token",
45
+ "--warmup_steps=2000",
46
+ "--do_train",
47
+ "--do_eval"
48
+ ],
49
+ "state": "running",
50
+ "program": "run_speech_recognition_ctc.py",
51
+ "codePath": "run_speech_recognition_ctc.py",
52
+ "git": {
53
+ "remote": "https://huggingface.co/Plim/xls-r-1b-cv_8-fr",
54
+ "commit": "89ae304fd007aa488056ada57d1062398d37739d"
55
+ },
56
+ "email": "lim.pascal93@gmail.com",
57
+ "root": "/workspace/xls-r-1b-cv_8-fr",
58
+ "host": "job-597becdf-05fc-498e-bdc5-d363b0af8ddd",
59
+ "username": "ovh",
60
+ "executable": "/opt/conda/bin/python"
61
+ }
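The "args" array in the metadata above records the exact hyperparameters of this run, resumed from ./checkpoint-13000 on a single Tesla V100S. A small arithmetic sketch, using only values copied from that metadata, makes the effective batch size explicit.

```python
# Values copied from the wandb metadata above; nothing here is a new claim.
per_device_train_batch_size = 16
gradient_accumulation_steps = 8
gpu_count = 1  # "gpu_count": 1

effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * gpu_count
)
print(effective_batch_size)  # 128 samples per optimizer step
```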
wandb/run-20220206_201634-uhiy9e2t/files/wandb-summary.json ADDED
The diff for this file is too large to render. See raw diff
wandb/run-20220206_201634-uhiy9e2t/logs/debug-internal.log ADDED
The diff for this file is too large to render. See raw diff
wandb/run-20220206_201634-uhiy9e2t/logs/debug.log ADDED
@@ -0,0 +1,26 @@
1
+ 2022-02-06 20:16:34,262 INFO MainThread:9578 [wandb_setup.py:_flush():75] Loading settings from /workspace/.config/wandb/settings
2
+ 2022-02-06 20:16:34,262 INFO MainThread:9578 [wandb_setup.py:_flush():75] Loading settings from /workspace/xls-r-1b-cv_8-fr/wandb/settings
3
+ 2022-02-06 20:16:34,262 INFO MainThread:9578 [wandb_setup.py:_flush():75] Loading settings from environment variables: {'project': 'xls-r-1b-cv_8-fr'}
4
+ 2022-02-06 20:16:34,262 INFO MainThread:9578 [wandb_setup.py:_flush():75] Inferring run settings from compute environment: {'program_relpath': 'run_speech_recognition_ctc.py', 'program': 'run_speech_recognition_ctc.py'}
5
+ 2022-02-06 20:16:34,262 INFO MainThread:9578 [wandb_init.py:_log_setup():386] Logging user logs to /workspace/xls-r-1b-cv_8-fr/wandb/run-20220206_201634-uhiy9e2t/logs/debug.log
6
+ 2022-02-06 20:16:34,263 INFO MainThread:9578 [wandb_init.py:_log_setup():387] Logging internal logs to /workspace/xls-r-1b-cv_8-fr/wandb/run-20220206_201634-uhiy9e2t/logs/debug-internal.log
7
+ 2022-02-06 20:16:34,263 INFO MainThread:9578 [wandb_init.py:init():420] calling init triggers
8
+ 2022-02-06 20:16:34,263 INFO MainThread:9578 [wandb_init.py:init():425] wandb.init called with sweep_config: {}
9
+ config: {}
10
+ 2022-02-06 20:16:34,263 INFO MainThread:9578 [wandb_init.py:init():471] starting backend
11
+ 2022-02-06 20:16:34,263 INFO MainThread:9578 [backend.py:_multiprocessing_setup():99] multiprocessing start_methods=fork,spawn,forkserver, using: spawn
12
+ 2022-02-06 20:16:34,587 INFO MainThread:9578 [backend.py:ensure_launched():219] starting backend process...
13
+ 2022-02-06 20:16:34,912 INFO MainThread:9578 [backend.py:ensure_launched():224] started backend process with pid: 10249
14
+ 2022-02-06 20:16:34,915 INFO MainThread:9578 [wandb_init.py:init():480] backend started and connected
15
+ 2022-02-06 20:16:34,924 INFO MainThread:9578 [wandb_init.py:init():550] updated telemetry
16
+ 2022-02-06 20:16:35,517 INFO MainThread:9578 [wandb_init.py:init():581] communicating current version
17
+ 2022-02-06 20:16:36,068 INFO MainThread:9578 [wandb_init.py:init():586] got version response
18
+ 2022-02-06 20:16:36,068 INFO MainThread:9578 [wandb_init.py:init():596] communicating run to backend with 30 second timeout
19
+ 2022-02-06 20:16:36,262 INFO MainThread:9578 [wandb_init.py:init():624] starting run threads in backend
20
+ 2022-02-06 20:16:36,857 INFO MainThread:9578 [wandb_run.py:_console_start():1827] atexit reg
21
+ 2022-02-06 20:16:36,858 INFO MainThread:9578 [wandb_run.py:_redirect():1701] redirect: SettingsConsole.REDIRECT
22
+ 2022-02-06 20:16:36,859 INFO MainThread:9578 [wandb_run.py:_redirect():1706] Redirecting console.
23
+ 2022-02-06 20:16:36,866 INFO MainThread:9578 [wandb_run.py:_redirect():1762] Redirects installed.
24
+ 2022-02-06 20:16:36,866 INFO MainThread:9578 [wandb_init.py:init():651] run started, returning control to user process
25
+ 2022-02-06 20:16:36,869 INFO MainThread:9578 [wandb_run.py:_config_callback():966] config_cb None None {'return_dict': True, 'output_hidden_states': False, 'output_attentions': False, 'torchscript': False, 'torch_dtype': 'float32', 'use_bfloat16': False, 'pruned_heads': {}, 'tie_word_embeddings': True, 'is_encoder_decoder': False, 'is_decoder': False, 'cross_attention_hidden_size': None, 'add_cross_attention': False, 'tie_encoder_decoder': False, 'max_length': 20, 'min_length': 0, 'do_sample': False, 'early_stopping': False, 'num_beams': 1, 'num_beam_groups': 1, 'diversity_penalty': 0.0, 'temperature': 1.0, 'top_k': 50, 'top_p': 1.0, 'repetition_penalty': 1.0, 'length_penalty': 1.0, 'no_repeat_ngram_size': 0, 'encoder_no_repeat_ngram_size': 0, 'bad_words_ids': None, 'num_return_sequences': 1, 'chunk_size_feed_forward': 0, 'output_scores': False, 'return_dict_in_generate': False, 'forced_bos_token_id': None, 'forced_eos_token_id': None, 'remove_invalid_values': False, 'architectures': ['Wav2Vec2ForCTC'], 'finetuning_task': None, 'id2label': {0: 'LABEL_0', 1: 'LABEL_1'}, 'label2id': {'LABEL_0': 0, 'LABEL_1': 1}, 'tokenizer_class': None, 'prefix': None, 'bos_token_id': 1, 'pad_token_id': 45, 'eos_token_id': 2, 'sep_token_id': None, 'decoder_start_token_id': None, 'task_specific_params': None, 'problem_type': None, '_name_or_path': './checkpoint-13000', 'transformers_version': '4.17.0.dev0', 'feat_extract_dropout': 0.0, 'model_type': 'wav2vec2', 'num_feat_extract_layers': 7, 'hidden_size': 1280, 'feat_extract_norm': 'layer', 'feat_extract_activation': 'gelu', 'conv_dim': [512, 512, 512, 512, 512, 512, 512], 'conv_stride': [5, 2, 2, 2, 2, 2, 2], 'conv_kernel': [10, 3, 3, 3, 3, 2, 2], 'conv_bias': True, 'num_conv_pos_embeddings': 128, 'num_conv_pos_embedding_groups': 16, 'num_hidden_layers': 48, 'intermediate_size': 5120, 'hidden_act': 'gelu', 'num_attention_heads': 16, 'hidden_dropout': 0.0, 'attention_dropout': 0.0, 'activation_dropout': 0.1, 'feat_proj_dropout': 0.0, 'final_dropout': 0.0, 'layerdrop': 0.0, 'layer_norm_eps': 1e-05, 'initializer_range': 0.02, 'vocab_size': 46, 'do_stable_layer_norm': True, 'use_weighted_layer_sum': False, 'apply_spec_augment': True, 'mask_time_prob': 0.75, 'mask_time_length': 10, 'mask_time_min_masks': 2, 'mask_feature_prob': 0.25, 'mask_feature_length': 64, 'mask_feature_min_masks': 0, 'num_codevectors_per_group': 320, 'num_codevector_groups': 2, 'contrastive_logits_temperature': 0.1, 'feat_quantizer_dropout': 0.0, 'num_negatives': 100, 'codevector_dim': 1024, 'proj_codevector_dim': 1024, 'diversity_loss_weight': 0.1, 'ctc_loss_reduction': 'mean', 'ctc_zero_infinity': False, 'add_adapter': False, 'adapter_kernel_size': 3, 'adapter_stride': 2, 'num_adapter_layers': 3, 'output_hidden_size': 1280, 'classifier_proj_size': 256, 'tdnn_dim': [512, 512, 512, 512, 1500], 'tdnn_kernel': [5, 3, 3, 1, 1], 'tdnn_dilation': [1, 2, 3, 1, 1], 'xvector_output_dim': 512, 'output_dir': './', 'overwrite_output_dir': True, 'do_train': True, 'do_eval': True, 'do_predict': False, 'evaluation_strategy': 'steps', 'prediction_loss_only': False, 'per_device_train_batch_size': 16, 'per_device_eval_batch_size': 16, 'per_gpu_train_batch_size': 'None', 'per_gpu_eval_batch_size': 'None', 'gradient_accumulation_steps': 8, 'eval_accumulation_steps': 'None', 'learning_rate': 7.5e-05, 'weight_decay': 0.0, 'adam_beta1': 0.9, 'adam_beta2': 0.999, 'adam_epsilon': 1e-08, 'max_grad_norm': 1.0, 'num_train_epochs': 6.0, 'max_steps': -1, 'lr_scheduler_type': 'linear', 'warmup_ratio': 0.0, 
'warmup_steps': 2000, 'log_level': -1, 'log_level_replica': -1, 'log_on_each_node': True, 'logging_dir': './runs/Feb06_20-15-02_job-597becdf-05fc-498e-bdc5-d363b0af8ddd', 'logging_strategy': 'steps', 'logging_first_step': False, 'logging_steps': 100, 'logging_nan_inf_filter': True, 'save_strategy': 'steps', 'save_steps': 1000, 'save_total_limit': 3, 'save_on_each_node': False, 'no_cuda': False, 'seed': 42, 'bf16': False, 'fp16': True, 'fp16_opt_level': 'O1', 'half_precision_backend': 'amp', 'bf16_full_eval': False, 'fp16_full_eval': False, 'tf32': 'None', 'local_rank': -1, 'xpu_backend': 'None', 'tpu_num_cores': 'None', 'tpu_metrics_debug': False, 'debug': '[]', 'dataloader_drop_last': False, 'eval_steps': 1000, 'dataloader_num_workers': 0, 'past_index': -1, 'run_name': './', 'disable_tqdm': False, 'remove_unused_columns': True, 'label_names': 'None', 'load_best_model_at_end': True, 'metric_for_best_model': 'loss', 'greater_is_better': False, 'ignore_data_skip': False, 'sharded_ddp': '[]', 'deepspeed': 'None', 'label_smoothing_factor': 0.0, 'optim': 'adamw_hf', 'adafactor': False, 'group_by_length': True, 'length_column_name': 'input_length', 'report_to': "['wandb']", 'ddp_find_unused_parameters': 'None', 'ddp_bucket_cap_mb': 'None', 'dataloader_pin_memory': True, 'skip_memory_metrics': True, 'use_legacy_prediction_loop': False, 'push_to_hub': True, 'resume_from_checkpoint': 'None', 'hub_model_id': 'None', 'hub_strategy': 'every_save', 'hub_token': '<HUB_TOKEN>', 'gradient_checkpointing': True, 'fp16_backend': 'auto', 'push_to_hub_model_id': 'None', 'push_to_hub_organization': 'None', 'push_to_hub_token': '<PUSH_TO_HUB_TOKEN>', '_n_gpu': 1, 'mp_parameters': '', 'train_batch_size': 16, 'eval_batch_size': 16}
26
+ 2022-02-06 20:16:36,875 INFO MainThread:9578 [wandb_watch.py:watch():43] Watching
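The config_cb dump in debug.log above records the full Wav2Vec2 configuration for this run: hidden size 1280, 48 transformer layers, SpecAugment with mask_time_prob=0.75 and mask_feature_prob=0.25, and a 46-token CTC vocabulary. A minimal sketch of reading those same values back, assuming the ./checkpoint-13000 directory referenced above is available locally:

```python
# Sketch: inspect the configuration values logged above from the checkpoint itself.
from transformers import Wav2Vec2Config

config = Wav2Vec2Config.from_pretrained("./checkpoint-13000")
print(config.hidden_size)        # 1280
print(config.num_hidden_layers)  # 48
print(config.mask_time_prob)     # 0.75 (SpecAugment masking over time steps)
print(config.mask_feature_prob)  # 0.25 (SpecAugment masking over feature channels)
print(config.vocab_size)         # 46 CTC output tokens
```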
wandb/run-20220206_201634-uhiy9e2t/run-uhiy9e2t.wandb ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:288dff72634d18c03ad9c9cc760f78811e6d92ceb73634d6eb187ac9fe19a74b
3
+ size 15081380