---
license: mit
base_model: arslanarjumand/wav2vec-reptiles
tags:
- generated_from_trainer
model-index:
- name: wav2vec-reptiles
  results: []
---

# wav2vec-reptiles

This model is a fine-tuned version of [arslanarjumand/wav2vec-reptiles](https://huggingface.co/arslanarjumand/wav2vec-reptiles) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 182.3516
- Pcc Accuracy: 0.6684
- Pcc Fluency: 0.6499
- Pcc Total Score: 0.7110
- Pcc Content: 0.6788
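
The "Pcc" values are Pearson correlation coefficients between predicted and reference scores. As a hedged illustration of how the model might be loaded and queried (the head class and the four-score output layout are assumptions, not stated in this card):

```python
import torch
from transformers import AutoFeatureExtractor, Wav2Vec2ForSequenceClassification

# Hedged sketch: the card does not name the head class or score order, so a
# 4-way regression head (accuracy, fluency, total score, content) is assumed.
model_id = "arslanarjumand/wav2vec-reptiles"
extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
model.eval()

waveform = torch.randn(16_000).numpy()  # placeholder: 1 s of 16 kHz mono audio
inputs = extractor(waveform, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits  # assumed shape: (1, 4)
print(scores)
```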

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.5e-05
- train_batch_size: 4
- eval_batch_size: 6
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.4
- num_epochs: 15
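
These settings map onto `transformers.TrainingArguments` as sketched below; everything not listed above (e.g. `output_dir`) is an assumption:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec-reptiles",    # assumed, not stated in the card
    learning_rate=5.5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=6,
    seed=42,
    gradient_accumulation_steps=4,    # 4 x 4 = total train batch size of 16
    lr_scheduler_type="cosine",
    warmup_ratio=0.4,
    num_train_epochs=15,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-8 matches the
    # transformers default optimizer settings.
)
```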

### Training results

| Training Loss | Epoch | Step | Validation Loss | Pcc Accuracy | Pcc Fluency | Pcc Total Score | Pcc Content |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------:|:---------------:|:-----------:|
| 2719.4074     | 0.97  | 500  | 2790.7349       | 0.1171       | 0.1116      | 0.1218          | 0.1245      |
| 386.8535      | 1.93  | 1000 | 361.3293        | 0.1481       | 0.1332      | 0.1511          | 0.1445      |
| 273.8093      | 2.9   | 1500 | 304.4040        | 0.2869       | 0.2915      | 0.3062          | 0.2849      |
| 280.8214      | 3.87  | 2000 | 277.9273        | 0.4065       | 0.4344      | 0.4465          | 0.4131      |
| 264.1531      | 4.84  | 2500 | 265.5385        | 0.5012       | 0.5234      | 0.5490          | 0.5117      |
| 211.6362      | 5.8   | 3000 | 226.9335        | 0.5675       | 0.5768      | 0.6171          | 0.5817      |
| 217.8737      | 6.77  | 3500 | 218.1019        | 0.6089       | 0.5984      | 0.6525          | 0.6194      |
| 180.3319      | 7.74  | 4000 | 201.4108        | 0.6296       | 0.6142      | 0.6721          | 0.6395      |
| 174.7695      | 8.7   | 4500 | 201.3474        | 0.6427       | 0.6297      | 0.6872          | 0.6542      |
| 182.4466      | 9.67  | 5000 | 189.6567        | 0.6566       | 0.6333      | 0.6957          | 0.6619      |
| 184.7177      | 10.64 | 5500 | 182.7654        | 0.6628       | 0.6405      | 0.7033          | 0.6713      |
| 174.6915      | 11.61 | 6000 | 181.2284        | 0.6635       | 0.6479      | 0.7077          | 0.6755      |
| 187.671       | 12.57 | 6500 | 180.5753        | 0.6676       | 0.6486      | 0.7099          | 0.6773      |
| 166.4409      | 13.54 | 7000 | 181.2506        | 0.6682       | 0.6493      | 0.7105          | 0.6781      |
| 176.7043      | 14.51 | 7500 | 182.3516        | 0.6684       | 0.6499      | 0.7110          | 0.6788      |
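
A minimal sketch of a `compute_metrics` function that would produce the four Pcc columns above, assuming Pcc denotes the Pearson correlation coefficient (metric key names and score order are assumptions):

```python
from scipy.stats import pearsonr

# Assumed order of the model's four regression outputs.
SCORE_NAMES = ["accuracy", "fluency", "total_score", "content"]

def compute_metrics(eval_pred):
    """Per-score Pearson correlation (Pcc) between predictions and labels."""
    predictions, labels = eval_pred
    return {
        f"pcc_{name}": pearsonr(predictions[:, i], labels[:, i])[0]
        for i, name in enumerate(SCORE_NAMES)
    }
```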


### Framework versions

- Transformers 4.37.0
- Pytorch 2.1.2
- Datasets 2.18.0
- Tokenizers 0.15.1