Pedro Cuenca committed on
Commit
37ae5a5
1 Parent(s): 563f9ac

* Latest version of the model after some additional training.

Files changed (2)
  1. README.md +34 -3
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -23,7 +23,7 @@ model-index:
   metrics:
   - name: Test WER
     type: wer
-     value: 11.74
+     value: 10.61
 ---
 
 # Wav2Vec2-Large-XLSR-53-Spanish
@@ -179,12 +179,43 @@ print("WER: {:2f}".format(100 * chunked_wer(result["sentence"], result["pred_str
 
 ```
 
- **Test Result**: 11.74 %
+ **Test Result**: 10.61 %
 
+ ## Text processing
+
+ The Common Voice `es` dataset contains a lot of characters that don't belong to the Spanish language, even after discarding separators and punctuation. I applied some character translations and discarded most of the extraneous characters.
+
+ I decided to keep all the Spanish diacritics. This was a difficult decision. Sometimes the diacritics are there only because of orthography rules and don't alter the meaning of the word. In other cases, however, they carry meaning, as they disambiguate between different senses. A better WER score would surely have been achieved using just the non-accented characters, and the resulting text would still be understood by Spanish speakers. Nevertheless, I think keeping them is "more correct".
+
+ All the rules I applied are shown in the evaluation script.
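The evaluation script itself is not part of this diff; the following is a minimal sketch of the kind of cleaning rules described above (the regex and the character translations here are illustrative placeholders, not the actual rules):

```python
import re

# Illustrative only: the real rules live in the evaluation script.
chars_to_ignore_regex = r'[,?.!;:"“%‘”¿¡\-]'                       # separators and punctuation
char_translations = str.maketrans({"à": "á", "ï": "i", "ö": "o"})  # example character "translations"

def clean_sentence(sentence: str) -> str:
    sentence = sentence.lower()
    sentence = re.sub(chars_to_ignore_regex, "", sentence)  # discard punctuation / extraneous symbols
    sentence = sentence.translate(char_translations)        # map look-alike characters
    # Spanish diacritics (á, é, í, ó, ú, ü, ñ) are deliberately kept.
    return sentence

print(clean_sentence("¿Qué pasó, realmente?"))  # -> "qué pasó realmente"
```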
 
 ## Training
 
 The Common Voice `train` and `validation` datasets were used for training.
 
- Training details TBD (I did it incrementally, don't have a self-contained script right now).
+ For dataset handling reasons, I initially split `train`+`validation` into 10% splits so I could see progress earlier and react if needed.
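The splitting code is not included in this commit; a minimal sketch of one way to obtain ten 10% splits with the `datasets` library, using `Dataset.shard` (an assumption, not necessarily the method that was actually used):

```python
from datasets import concatenate_datasets, load_dataset

# Illustrative only: one possible way to build ten 10% splits of train+validation.
train = load_dataset("common_voice", "es", split="train")
validation = load_dataset("common_voice", "es", split="validation")
full = concatenate_datasets([train, validation]).shuffle(seed=42)

# shard(num_shards=10, index=i) yields ten disjoint, roughly equal partitions.
splits = [full.shard(num_shards=10, index=i) for i in range(10)]
# splits[0] would correspond to the "first split" mentioned below.
```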
+
+ * I trained for 30 epochs on the first split only, using values similar to the ones proposed by Patrick in his demo notebook. I used a `batch_size` of 24 with 2 gradient accumulation steps. This gave a WER of about 16.3% on the full test set.
+ * I then trained the resulting model on the 9 remaining splits, for 3 epochs each, but with a faster warmup of 75 steps.
+ * Next, I trained 3 epochs on each of the 10 splits using a smaller learning rate of `1e-4`. A warmup of 75 steps was used in this case too. The final model had a WER of about 11.7%.
+ * By this time we had already figured out the reason for the initial delay in training time, and I decided to use the full dataset for training. However, in my tests I had seen that varying the learning rate seemed to work well, so I wanted to replicate that. I selected a cosine schedule with hard restarts (sketched below), a reference learning rate of `3e-5` and 10 epochs. I configured the cosine schedule to have 10 cycles too, and used no warmup. This produced a WER of ~10.6%.
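The training script is not part of this commit; the following is a minimal sketch of that final schedule (cosine with hard restarts, reference learning rate `3e-5`, 10 cycles, no warmup), assuming a plain PyTorch optimizer rather than the exact setup that was used:

```python
import torch
from transformers import get_cosine_with_hard_restarts_schedule_with_warmup

# Placeholders: in the real run these come from the wav2vec2 model being
# fine-tuned and from len(train_dataloader) * num_epochs.
model = torch.nn.Linear(16, 16)
num_training_steps = 10_000  # illustrative

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)  # reference learning rate
scheduler = get_cosine_with_hard_restarts_schedule_with_warmup(
    optimizer,
    num_warmup_steps=0,                     # no warmup
    num_training_steps=num_training_steps,
    num_cycles=10,                          # ten cosine cycles over the 10 epochs
)

# In the training loop: loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```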
+
+
+ ## Other things I tried
+
+ * Starting from the same fine-tuned model, I compared a constant learning rate of `1e-4` against a linear schedule with warmup. The linear schedule worked better (11.85 vs. 12.72 % WER).
+ * I tried to use a Spanish model to improve a Basque one. I transformed the text to make the orthography more similar to the target language, but the Basque model did not improve.
+ * Label smoothing did not work.
+
+ ## Issues and other technical challenges
+
+ I had previously used the `transformers` library as an end user, just to try BERT on some tasks, but this is the first time I have needed to look into the code.
+
+ * The `Datasets` abstraction is great because, being based on memory-mapped files, it allows arbitrarily-sized datasets to be processed. However, it is important to understand its limitations and trade-offs. I found caching convenient, but disk usage explodes fast. I keep the datasets for my current projects on a 1 TB fast SSD, and a couple of times I ran out of space. I had to understand how cache files are stored and learn when it's best to disable caching and save manually instead. I found that data exploration works better on smaller or sampled datasets, while actual processing is most efficient once you have identified the transformations you need and can apply them in a single `map` operation.
+
+ * There was a noticeable delay before training started. Fortunately, we found the reason, discussed it in Slack and the forums, and created a workaround.
+
+ * The WER metric crashed on large datasets. I evaluated on a small sample (it's also faster) and wrote an accumulative version of WER that runs in fixed memory; a sketch of the idea appears after this list. I'd like to verify whether it makes sense to use this version inside the training loop.
+
+ * When using `num_proc` inside a notebook, I could not see progress bars. This is surely some permissions issue on my computer; I still need to track it down.
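The evaluation snippet above calls a `chunked_wer` helper; its exact implementation is in the evaluation script, but the idea of accumulating error counts chunk by chunk (so memory stays fixed regardless of dataset size) can be sketched like this, assuming `jiwer` as the backend:

```python
import jiwer

def chunked_wer(targets, predictions, chunk_size=1000):
    """Sketch of an accumulative WER: process fixed-size chunks and sum the
    hit/substitution/deletion/insertion counts instead of keeping everything in memory."""
    hits = substitutions = deletions = insertions = 0
    for start in range(0, len(targets), chunk_size):
        measures = jiwer.compute_measures(
            targets[start : start + chunk_size],
            predictions[start : start + chunk_size],
        )
        hits += measures["hits"]
        substitutions += measures["substitutions"]
        deletions += measures["deletions"]
        insertions += measures["insertions"]
    # WER = (S + D + I) / (H + S + D)
    return (substitutions + deletions + insertions) / (hits + substitutions + deletions)
```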
 
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2a4acd6f6ca614438fc757d637d25e6aad7e610453bd59f17bea91c79f53fc20
+ oid sha256:17a38614548f8087d0d653d65bc73f1718ade8b1aee92c220431bb092c444bb0
  size 1262081431