|
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: wav2vec2-base-intent-classification-ori-f1
  results: []
---
|
|
|
|
|
|
# wav2vec2-base-intent-classification-ori-f1 |
|
|
|
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) for audio intent classification. The fine-tuning dataset was not recorded in the training metadata.
|
It achieves the following results on the evaluation set: |
|
- Loss: 1.1637 |
|
- F1: 0.7917 |
|
|
|
## Model description |
|
|
|
This checkpoint adds an audio intent-classification head on top of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) and was fine-tuned with the Hugging Face `Trainer`, reaching an F1 of 0.7917 on the evaluation set (see the training results below). The label set and target domain are not documented.
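
No usage example was included in the generated card; the sketch below shows how a checkpoint of this type is typically loaded for inference. It assumes the repository contains the fine-tuned classification head, label mapping, and feature extractor saved by the Trainer; the model identifier and audio file path are placeholders.

```python
from transformers import pipeline

# Placeholder identifier: point this at the actual Hub repo id or local checkpoint directory.
classifier = pipeline(
    "audio-classification",
    model="wav2vec2-base-intent-classification-ori-f1",
)

# "command.wav" is a placeholder; wav2vec2-base expects 16 kHz mono audio.
for prediction in classifier("command.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```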
|
|
|
## Intended uses & limitations |
|
|
|
More information needed |
|
|
|
## Training and evaluation data |
|
|
|
More information needed |
|
|
|
## Training procedure |
|
|
|
### Training hyperparameters |
|
|
|
The following hyperparameters were used during training (an illustrative `TrainingArguments` sketch follows the list):
|
- learning_rate: 3e-05 |
|
- train_batch_size: 1 |
|
- eval_batch_size: 1 |
|
- seed: 42 |
|
- gradient_accumulation_steps: 4 |
|
- total_train_batch_size: 4 |
|
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 |
|
- lr_scheduler_type: linear |
|
- lr_scheduler_warmup_ratio: 0.1 |
|
- num_epochs: 45 |
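
The run is not reproducible from this card alone, but the values listed above map onto `transformers.TrainingArguments` roughly as follows; `output_dir` and `evaluation_strategy` are assumptions, and the Adam settings are the library defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-intent-classification-ori-f1",  # assumption
    learning_rate=3e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,   # effective train batch size of 4
    seed=42,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=45,
    evaluation_strategy="epoch",     # assumption: the table below reports one evaluation per epoch
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults listed above
)
```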
|
|
|
### Training results |
|
|
|
| Training Loss | Epoch | Step | Validation Loss | F1 | |
|
|:-------------:|:-----:|:----:|:---------------:|:------:| |
|
| 2.1968 | 1.0 | 28 | 2.1818 | 0.3333 | |
|
| 2.1592 | 2.0 | 56 | 2.0969 | 0.3333 | |
|
| 2.0755 | 3.0 | 84 | 2.0228 | 0.3333 | |
|
| 1.9803 | 4.0 | 112 | 1.8093 | 0.4167 | |
|
| 1.8034 | 5.0 | 140 | 1.6328 | 0.4375 | |
|
| 1.8118 | 6.0 | 168 | 1.5828 | 0.4583 | |
|
| 1.5830        | 7.0   | 196  | 1.5873          | 0.4375 |
|
| 1.4798 | 8.0 | 224 | 1.3843 | 0.5417 | |
|
| 1.1876 | 9.0 | 252 | 1.1997 | 0.5417 | |
|
| 1.2811 | 10.0 | 280 | 1.0566 | 0.6458 | |
|
| 0.9653        | 11.0  | 308  | 1.0389          | 0.6250 |
|
| 1.0774 | 12.0 | 336 | 0.8610 | 0.7292 | |
|
| 0.8276 | 13.0 | 364 | 0.9553 | 0.6458 | |
|
| 0.8264 | 14.0 | 392 | 0.9159 | 0.7292 | |
|
| 0.6622 | 15.0 | 420 | 0.6933 | 0.8125 | |
|
| 0.6244 | 16.0 | 448 | 0.6302 | 0.8333 | |
|
| 0.4519 | 17.0 | 476 | 0.6524 | 0.8125 | |
|
| 0.2844 | 18.0 | 504 | 0.8344 | 0.6875 | |
|
| 0.2612        | 19.0  | 532  | 0.7528          | 0.7500 |
|
| 0.1853 | 20.0 | 560 | 1.0809 | 0.6875 | |
|
| 0.1259 | 21.0 | 588 | 1.0479 | 0.7083 | |
|
| 0.0943 | 22.0 | 616 | 0.9220 | 0.7708 | |
|
| 0.0698 | 23.0 | 644 | 1.1877 | 0.7083 | |
|
| 0.0637 | 24.0 | 672 | 1.1921 | 0.7292 | |
|
| 0.0507 | 25.0 | 700 | 1.0526 | 0.7708 | |
|
| 0.0470        | 26.0  | 728  | 1.0134          | 0.7917 |
|
| 0.0406 | 27.0 | 756 | 1.0251 | 0.7708 | |
|
| 0.0313 | 28.0 | 784 | 1.0058 | 0.7917 | |
|
| 0.0303 | 29.0 | 812 | 1.0468 | 0.7917 | |
|
| 0.0251 | 30.0 | 840 | 1.0168 | 0.7917 | |
|
| 0.0259 | 31.0 | 868 | 1.0310 | 0.7917 | |
|
| 0.0216 | 32.0 | 896 | 1.0616 | 0.7917 | |
|
| 0.0228 | 33.0 | 924 | 1.0887 | 0.7917 | |
|
| 0.0208 | 34.0 | 952 | 1.7480 | 0.6458 | |
|
| 0.0203 | 35.0 | 980 | 1.1806 | 0.7708 | |
|
| 0.0209 | 36.0 | 1008 | 1.1815 | 0.7708 | |
|
| 0.0182 | 37.0 | 1036 | 1.1791 | 0.7917 | |
|
| 0.0177 | 38.0 | 1064 | 1.1730 | 0.7917 | |
|
| 0.0165 | 39.0 | 1092 | 1.1680 | 0.7917 | |
|
| 0.0177 | 40.0 | 1120 | 1.1642 | 0.7917 | |
|
| 0.0176 | 41.0 | 1148 | 1.1636 | 0.7917 | |
|
| 0.0168 | 42.0 | 1176 | 1.1638 | 0.7917 | |
|
| 0.0166 | 43.0 | 1204 | 1.1639 | 0.7917 | |
|
| 0.0149 | 44.0 | 1232 | 1.1636 | 0.7917 | |
|
| 0.0149 | 45.0 | 1260 | 1.1637 | 0.7917 | |
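
The F1 column comes from the metric function passed to the `Trainer`. A minimal sketch of such a function, using the Datasets version listed below, is given here; the `average` mode is an assumption, since the card reports only a single F1 value.

```python
import numpy as np
from datasets import load_metric  # Datasets 2.1.0, per the framework versions below

f1_metric = load_metric("f1")

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    # average="macro" is an assumption; the card does not state the averaging strategy.
    return f1_metric.compute(predictions=predictions, references=labels, average="macro")
```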
|
|
|
|
|
### Framework versions |
|
|
|
- Transformers 4.20.1 |
|
- PyTorch 1.11.0
|
- Datasets 2.1.0 |
|
- Tokenizers 0.12.1 |
|
|