---
license: apache-2.0
tags:
- automatic-speech-recognition
- experiments/data/atcosim_uwb_atcc/train
- generated_from_trainer
metrics:
- wer
model-index:
- name: 0.0ld_0.0ad_0.0attd_0.05fpd_0.075mtp_12mtl_0.0mfp_12mfl_1acc
results: []
---
# 0.0ld_0.0ad_0.0attd_0.05fpd_0.075mtp_12mtl_0.0mfp_12mfl_1acc
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the `experiments/data/atcosim_uwb_atcc/train` dataset, which by its path appears to combine the ATCOSIM and UWB-ATCC air traffic control corpora.
It achieves the following results on the evaluation set:
- Loss: 0.5595
- WER: 0.1687
## Model description
This is the 300M-parameter XLS-R (wav2vec 2.0) model fine-tuned for English automatic speech recognition on air traffic control speech, presumably with a CTC head, which is the standard wav2vec 2.0 fine-tuning setup. The model name appears to encode the dropout, masking, and gradient-accumulation settings used in the experiment, though this is inferred from the name rather than documented.
## Intended uses & limitations
The model is intended for transcribing English air-traffic-control style speech similar to the training data. Its behaviour on general-domain or non-English audio is not evaluated in this card, so performance outside the ATC domain should not be assumed.
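For loading the model, a minimal sketch with the 🤗 Transformers `pipeline` API follows. This is an assumption rather than a documented usage example: any wav2vec 2.0 CTC checkpoint loads this way, and `<this-repo-id>` is a placeholder for wherever this checkpoint is hosted.

```python
# Minimal inference sketch. Assumption: this checkpoint loads like any
# wav2vec 2.0 CTC model; "<this-repo-id>" is a placeholder, not the real Hub id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<this-repo-id>",  # placeholder for this model's Hub repository
)

# wav2vec 2.0 XLS-R expects 16 kHz mono audio.
print(asr("path/to/atc_recording.wav")["text"])
```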
## Training and evaluation data
The model was fine-tuned on the `experiments/data/atcosim_uwb_atcc/train` split; the evaluation results above were computed on the corresponding held-out set from the same experiment. No further split details are documented in this card.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the corresponding `TrainingArguments` follows the list):
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
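Mapped onto 🤗 `TrainingArguments`, the list above corresponds roughly to the sketch below. This is a reconstruction, not the original training script; `output_dir` is a placeholder and the real script may differ.

```python
# Reconstruction of the hyperparameters above as Transformers TrainingArguments.
# Sketch only: output_dir is a placeholder.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xls-r-300m-atc",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=24,
    per_device_eval_batch_size=12,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    max_steps=10000,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the Trainer's
    # default optimizer settings, so no extra arguments are needed for it.
)
```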
### Training results
| Training Loss | Epoch | Step  | Validation Loss | WER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.63 | 500 | 3.0458 | 1.0 |
| 2.9181 | 1.27 | 1000 | 1.1503 | 0.4723 |
| 2.9181 | 1.9 | 1500 | 0.8275 | 0.3500 |
| 0.7646 | 2.53 | 2000 | 0.6990 | 0.2845 |
| 0.7646 | 3.17 | 2500 | 0.5828 | 0.2509 |
| 0.5394 | 3.8 | 3000 | 0.5363 | 0.2487 |
| 0.5394 | 4.44 | 3500 | 0.5467 | 0.2171 |
| 0.4558 | 5.07 | 4000 | 0.5290 | 0.2090 |
| 0.4558 | 5.7 | 4500 | 0.4992 | 0.2046 |
| 0.3773 | 6.34 | 5000 | 0.4934 | 0.2052 |
| 0.3773 | 6.97 | 5500 | 0.4700 | 0.1983 |
| 0.3301 | 7.6 | 6000 | 0.4938 | 0.1874 |
| 0.3301 | 8.24 | 6500 | 0.5364 | 0.1893 |
| 0.2938 | 8.87 | 7000 | 0.5170 | 0.1830 |
| 0.2938 | 9.51 | 7500 | 0.5408 | 0.1815 |
| 0.2674 | 10.14 | 8000 | 0.5581 | 0.1733 |
| 0.2674 | 10.77 | 8500 | 0.5389 | 0.1719 |
| 0.24 | 11.41 | 9000 | 0.5344 | 0.1714 |
| 0.24 | 12.04 | 9500 | 0.5503 | 0.1686 |
| 0.211 | 12.67 | 10000 | 0.5595 | 0.1687 |
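
WER here is the word error rate: word-level edit distance (substitutions + insertions + deletions) divided by the number of reference words. A minimal sketch of computing it with the `evaluate` library, using made-up strings rather than actual evaluation data:

```python
# Sketch: how a WER figure like the 0.1687 above is computed.
# The strings are illustrative only, not taken from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["cleared to land runway two seven"]
references = ["cleared to land runway two seven right"]

# One deletion against a seven-word reference -> WER = 1/7 ≈ 0.143
print(wer_metric.compute(predictions=predictions, references=references))
```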
### Framework versions
- Transformers 4.24.0
- PyTorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2