|
---
language: ary
metrics:
- wer
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
model-index:
- name: XLSR Wav2Vec2 Moroccan Arabic dialect by Boumehdi
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    metrics:
    - name: Test WER
      type: wer
      value: 0.09
---
|
# Wav2Vec2-Large-XLSR-53-Moroccan-Darija |
|
|
|
**wav2vec2-large-xlsr-53** fine-tuned on 120 hours of labeled Darija audio.
|
|
|
## Usage |
|
|
|
The model can be used directly as follows: |
|
|
|
```python
import librosa
import torch
from transformers import Wav2Vec2CTCTokenizer, Wav2Vec2ForCTC, Wav2Vec2Processor

tokenizer = Wav2Vec2CTCTokenizer("./vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|")
processor = Wav2Vec2Processor.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija', tokenizer=tokenizer)
model = Wav2Vec2ForCTC.from_pretrained('boumehdi/wav2vec2-large-xlsr-moroccan-darija')

# load the audio data (use your own wav file here!)
input_audio, sr = librosa.load('file.wav', sr=16000)

# extract the input features
input_values = processor(input_audio, sampling_rate=16000, return_tensors="pt", padding=True).input_values

# retrieve logits
with torch.no_grad():
    logits = model(input_values).logits

# greedy decoding: take the most likely token at each frame
tokens = torch.argmax(logits, dim=-1)

# decode the token ids to text
transcription = tokenizer.batch_decode(tokens)

# print the output
print(transcription)
```
|
|
|
Here's the output: قالت ليا هاد السيد هادا ما كاينش بحالو |
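The snippet above uses greedy argmax decoding. If you want n-gram language-model decoding instead, a library such as pyctcdecode can rescore the CTC output. Below is a minimal sketch that reuses `tokenizer` and `logits` from the usage example, assuming you provide your own KenLM model (`darija.arpa` is a placeholder, not a published file):

```python
# Hedged sketch: n-gram LM decoding with pyctcdecode (pip install pyctcdecode).
# 'darija.arpa' is a placeholder; you need to train or obtain a KenLM model yourself.
from pyctcdecode import build_ctcdecoder

vocab_dict = tokenizer.get_vocab()
# order the tokens by id and map the word delimiter '|' back to a space,
# since pyctcdecode expects ' ' as the word separator
labels = [t.replace("|", " ") for t in sorted(vocab_dict, key=vocab_dict.get)]

decoder = build_ctcdecoder(labels, kenlm_model_path="darija.arpa")

# pyctcdecode expects a (frames, vocab) array of log-probabilities
log_probs = torch.log_softmax(logits, dim=-1)[0].numpy()
print(decoder.decode(log_probs))
```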
|
|
|
## Evaluation & Previous Versions
|
|
|
==================================== |
|
|
|
-v5 (in progress): might add some common French words used in Darija (mais, alors, donc, ...)
|
|
|
It is going to be hard to see a great improvement from now on with my Nvidia GTX 1070 Ti :( since its VRAM is only 8 GB.

====================================
|
|
|
-v4 |
|
|
|
**WER**: 0.09
|
|
|
**Training Loss**: 13.59 |
|
|
|
**Validation Loss**: 0.07 |
|
|
|
Fixed a problem when dealing with the letters أ, ى, and إ.

====================================
|
|
|
-v3 (fine-tuned on 11 hours of audio + changed hyperparameters + discovered a huge mistake in the handling of the letter 'ا'; fixing it improved the WER dramatically)
|
|
|
**WER**: 22.86
|
|
|
**Training Loss**: 12.09 |
|
|
|
**Validation Loss**: 33.04 |
|
|
|
The validation loss goes down as we add more data for training. |
|
|
|
Further training to decrease the training loss makes this model overfit a little.
|
|
|
==================================== |
|
|
|
-v2 (fine-tuned on 9 hours of audio + replaced أ, ى and إ with ا as they caused a lot of problems + tried to standardize Moroccan Darija spelling; a sketch of this normalization appears below)
|
|
|
**WER**: 44.30
|
|
|
**Training Loss**: 12.99 |
|
|
|
**Validation Loss**: 36.93 |
|
|
|
The validation loss decreased in this version, which means the model generalizes better to unseen data than the previous version.
|
|
|
The validation loss is still high, partly because the validation data contains words that never appeared in training. The solution is to add more data and more hours of training.
|
|
|
Further training to decrease the training loss makes this model overfit a little.
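For illustration, the أ/إ/ى → ا replacement described for this version could look like the following minimal sketch (a hypothetical helper; the actual preprocessing script is not published here):

```python
# Hypothetical sketch of the alef normalization described above;
# not the exact code used to prepare the training data.
def normalize_alef(text: str) -> str:
    # map hamza-carrying alefs and alef maqsura to the bare alef 'ا'
    for ch in ("أ", "إ", "ى"):
        text = text.replace(ch, "ا")
    return text

print(normalize_alef("أنا"))  # -> 'انا'
```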
|
|
|
==================================== |
|
|
|
-v1 (fine-tuned on 6 hours of audio) |
|
|
|
**WER**: 49.68
|
|
|
**Training Loss**: 9.88 |
|
|
|
**Validation Loss**: 45.24 |
|
|
|
==================================== |
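All WER figures above compare the model's transcriptions against reference texts at the word level. As a point of reference, here is a minimal sketch of computing WER with the jiwer library (the strings are illustrative only; the actual evaluation script for these figures is not included here):

```python
# Hedged sketch: computing WER with jiwer (pip install jiwer).
# The reference/hypothesis pair below is illustrative only.
from jiwer import wer

reference = "قالت ليا هاد السيد هادا ما كاينش بحالو"
hypothesis = "قالت ليا هاد السيد ما كاينش بحالو"  # one word dropped

print(wer(reference, hypothesis))  # 1 deletion over 8 reference words -> 0.125
```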
|
|
|
## Future Work |
|
|
|
I am currently working on improving this model. |
|
|
|
Email: souregh@gmail.com
|
|