---
language:
- hy-AM
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_8_0
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-1b-hy-cv
results:
- task:
type: automatic-speech-recognition
name: Speech Recognition
dataset:
type: mozilla-foundation/common_voice_8_0
name: Common Voice hy-AM
args: hy-AM
metrics:
- type: wer
value: 10.92896174863388
name: WER LM
- type: cer
value: 2.3773394031360646
name: CER LM
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: Robust Speech Event - Dev Data
type: speech-recognition-community-v2/dev_data
args: hy
metrics:
- name: Test WER
type: wer
value: 19.942816297355254
- name: Test CER
type: cer
value: 7.332618465282714
---
# Wav2Vec2-XLS-R-1b-hy
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on a local Armenian dataset (`/WORKSPACE/DATA/HY/NOIZY_STUDENT_3/`; the path suggests a noisy-student self-training setup).
It achieves the following results on the evaluation set (the `LM` values use beam-search decoding with an external language model; the others use plain CTC decoding):
- Loss: 0.1827
- Wer: 0.2389
- Cer: 0.0427
- Wer LM: 0.1093
- Cer LM: 0.0238
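
WER and CER are word- and character-level edit distances normalized by reference length. A minimal sketch of computing them with the `jiwer` library (shown only to make the metrics concrete; the training script may have computed them differently):

```python
# pip install jiwer
import jiwer

reference = "բարև ձեզ"   # ground-truth transcript (placeholder example)
hypothesis = "բարև ձեր"  # model output (placeholder example)

# Word error rate: word-level edit distance / number of reference words
print("WER:", jiwer.wer(reference, hypothesis))
# Character error rate: character-level edit distance / number of reference characters
print("CER:", jiwer.cer(reference, hypothesis))
```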
## Model description
This is [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b), a 1-billion-parameter multilingual XLS-R speech encoder, fine-tuned with a CTC head for Armenian automatic speech recognition. Decoding with a language model roughly halves the error rates compared to plain CTC decoding (WER 0.2389 → 0.1093, CER 0.0427 → 0.0238 on the evaluation set); a usage sketch follows.
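
A sketch of transcription with LM decoding, assuming the repository ships a `Wav2Vec2ProcessorWithLM` (i.e. includes the n-gram LM behind the `LM` metrics) and is hosted at `arampacha/wav2vec2-xls-r-1b-hy-cv` (repo id inferred from the model-index name):

```python
# pip install transformers pyctcdecode kenlm torchaudio
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2ProcessorWithLM

repo = "arampacha/wav2vec2-xls-r-1b-hy-cv"  # assumed repo id
processor = Wav2Vec2ProcessorWithLM.from_pretrained(repo)
model = Wav2Vec2ForCTC.from_pretrained(repo)

# Load audio and resample to the 16 kHz rate XLS-R expects
waveform, sr = torchaudio.load("sample.wav")
waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# batch_decode on a ProcessorWithLM runs beam search with the n-gram LM
print(processor.batch_decode(logits.numpy()).text[0])
```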
## Intended uses & limitations
The model transcribes Armenian (hy-AM) speech sampled at 16 kHz. It was evaluated on read speech from Common Voice 8.0, and the higher error rates on the Robust Speech Event dev data (WER 19.94 vs. 10.93 with LM decoding) suggest accuracy degrades on noisier or more spontaneous audio.
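
For a quick test, the high-level `pipeline` API is enough (repo id again assumed; recent `transformers` versions apply LM decoding automatically when the repo contains a `Wav2Vec2ProcessorWithLM`):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="arampacha/wav2vec2-xls-r-1b-hy-cv")
print(asr("sample.wav")["text"])
```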
## Training and evaluation data
Training used a local dataset whose path (`NOIZY_STUDENT_3`) suggests a third iteration of noisy-student self-training; its exact composition is not documented. Evaluation used the Common Voice 8.0 `hy-AM` test split and the Robust Speech Event dev data (see the metrics above).
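
A sketch of loading the evaluation split with `datasets` (Common Voice 8.0 is gated, so you must accept its terms on the Hub and authenticate; the API shown matches the 1.18-era `datasets` listed below):

```python
from datasets import load_dataset, Audio

# Gated dataset: requires accepting the terms on the Hub and an auth token
cv_test = load_dataset(
    "mozilla-foundation/common_voice_8_0", "hy-AM",
    split="test", use_auth_token=True,
)
# Decode audio at the 16 kHz rate the model expects
cv_test = cv_test.cast_column("audio", Audio(sampling_rate=16_000))
print(cv_test[0]["sentence"])
```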
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8e-05
- train_batch_size: 16
- eval_batch_size: 64
- seed: 842
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3200
- mixed_precision_training: Native AMP
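
These map onto `transformers.TrainingArguments` roughly as follows (a sketch, not the exact training script; `output_dir` and the 200-step eval/save cadence are assumptions, the latter from the results table below):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-1b-hy-cv",  # assumed
    learning_rate=8e-5,
    per_device_train_batch_size=16,   # x 8 accumulation steps = 128 effective
    per_device_eval_batch_size=64,
    gradient_accumulation_steps=8,
    seed=842,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=3200,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=200,                   # matches the eval cadence in the results table
    save_steps=200,
)
```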
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
| 4.0311 | 3.51 | 200 | 0.7943 | 0.8981 | 0.2374 |
| 1.4388 | 7.02 | 400 | 0.2546 | 0.3821 | 0.0658 |
| 1.0949 | 10.53 | 600 | 0.2201 | 0.3216 | 0.0573 |
| 1.0279 | 14.04 | 800 | 0.2250 | 0.3271 | 0.0583 |
| 0.9923 | 17.54 | 1000 | 0.2074 | 0.3111 | 0.0543 |
| 0.972 | 21.05 | 1200 | 0.2165 | 0.2955 | 0.0536 |
| 0.9587 | 24.56 | 1400 | 0.2064 | 0.3017 | 0.0535 |
| 0.9421 | 28.07 | 1600 | 0.2062 | 0.2884 | 0.0519 |
| 0.9189 | 31.58 | 1800 | 0.2014 | 0.2822 | 0.0507 |
| 0.8919 | 35.09 | 2000 | 0.1952 | 0.2689 | 0.0488 |
| 0.8615 | 38.6 | 2200 | 0.2020 | 0.2685 | 0.0480 |
| 0.834 | 42.11 | 2400 | 0.2001 | 0.2654 | 0.0467 |
| 0.8056 | 45.61 | 2600 | 0.1935 | 0.2498 | 0.0448 |
| 0.7888 | 49.12 | 2800 | 0.1892 | 0.2451 | 0.0446 |
| 0.761 | 52.63 | 3000 | 0.1884 | 0.2432 | 0.0441 |
| 0.742 | 56.14 | 3200 | 0.1827 | 0.2389 | 0.0427 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0