arslanarjumand committed
Commit be4c542 • 1 Parent(s): f35f35f
arslanarjumand/wav2vec-read-aloud

Files changed:
- README.md +23 -16
- config.json +2 -2
- model.safetensors +2 -2
- training_args.bin +2 -2
README.md CHANGED
@@ -15,11 +15,11 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [arslanarjumand/wav2vec-reptiles](https://huggingface.co/arslanarjumand/wav2vec-reptiles) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss:
-- Pcc Accuracy: 0.
-- Pcc Fluency: 0.
-- Pcc Total Score: 0.
-- Pcc Content: 0.
+- Loss: 182.3516
+- Pcc Accuracy: 0.6684
+- Pcc Fluency: 0.6499
+- Pcc Total Score: 0.7110
+- Pcc Content: 0.6788
 
 ## Model description
 
@@ -38,7 +38,7 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate:
+- learning_rate: 5.5e-05
 - train_batch_size: 4
 - eval_batch_size: 6
 - seed: 42
@@ -46,26 +46,33 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: cosine
-- lr_scheduler_warmup_ratio: 0.
+- lr_scheduler_warmup_ratio: 0.4
 - num_epochs: 15
-- mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Pcc Accuracy | Pcc Fluency | Pcc Total Score | Pcc Content |
 |:-------------:|:-----:|:----:|:---------------:|:------------:|:-----------:|:---------------:|:-----------:|
-
-
-
-
-
-
-
+| 2719.4074 | 0.97 | 500 | 2790.7349 | 0.1171 | 0.1116 | 0.1218 | 0.1245 |
+| 386.8535 | 1.93 | 1000 | 361.3293 | 0.1481 | 0.1332 | 0.1511 | 0.1445 |
+| 273.8093 | 2.9 | 1500 | 304.4040 | 0.2869 | 0.2915 | 0.3062 | 0.2849 |
+| 280.8214 | 3.87 | 2000 | 277.9273 | 0.4065 | 0.4344 | 0.4465 | 0.4131 |
+| 264.1531 | 4.84 | 2500 | 265.5385 | 0.5012 | 0.5234 | 0.5490 | 0.5117 |
+| 211.6362 | 5.8 | 3000 | 226.9335 | 0.5675 | 0.5768 | 0.6171 | 0.5817 |
+| 217.8737 | 6.77 | 3500 | 218.1019 | 0.6089 | 0.5984 | 0.6525 | 0.6194 |
+| 180.3319 | 7.74 | 4000 | 201.4108 | 0.6296 | 0.6142 | 0.6721 | 0.6395 |
+| 174.7695 | 8.7 | 4500 | 201.3474 | 0.6427 | 0.6297 | 0.6872 | 0.6542 |
+| 182.4466 | 9.67 | 5000 | 189.6567 | 0.6566 | 0.6333 | 0.6957 | 0.6619 |
+| 184.7177 | 10.64 | 5500 | 182.7654 | 0.6628 | 0.6405 | 0.7033 | 0.6713 |
+| 174.6915 | 11.61 | 6000 | 181.2284 | 0.6635 | 0.6479 | 0.7077 | 0.6755 |
+| 187.671 | 12.57 | 6500 | 180.5753 | 0.6676 | 0.6486 | 0.7099 | 0.6773 |
+| 166.4409 | 13.54 | 7000 | 181.2506 | 0.6682 | 0.6493 | 0.7105 | 0.6781 |
+| 176.7043 | 14.51 | 7500 | 182.3516 | 0.6684 | 0.6499 | 0.7110 | 0.6788 |
 
 
 ### Framework versions
 
 - Transformers 4.37.0
 - Pytorch 2.1.2
-- Datasets 2.
+- Datasets 2.18.0
 - Tokenizers 0.15.1
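For reference, the hyperparameters listed in the updated README correspond to a fairly standard `transformers` `TrainingArguments` setup. The sketch below simply mirrors those values; the output path and the gradient_accumulation_steps of 4 (total_train_batch_size 16 / train_batch_size 4) are assumptions, not values recorded in the commit.

```python
# Minimal sketch of the reported hyperparameters, not the author's actual script.
# output_dir and gradient_accumulation_steps (16 / 4 = 4) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec-read-aloud",   # hypothetical path
    learning_rate=5.5e-05,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=6,
    gradient_accumulation_steps=4,     # 4 * 4 = total_train_batch_size 16
    num_train_epochs=15,
    lr_scheduler_type="cosine",
    warmup_ratio=0.4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```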
config.json CHANGED
@@ -11,7 +11,7 @@
 ],
 "attention_dropout": 0.0094,
 "bos_token_id": 1,
-"classifier_proj_size":
+"classifier_proj_size": 100,
 "codevector_dim": 768,
 "conformer_conv_dropout": 0.1,
 "contrastive_logits_temperature": 0.1,
@@ -56,7 +56,7 @@
 "num_attention_heads": 16,
 "num_codevector_groups": 2,
 "num_codevectors_per_group": 320,
-"num_hidden_layers":
+"num_hidden_layers": 8,
 "num_negatives": 100,
 "output_hidden_size": 1024,
 "pad_token_id": 0,
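The two config.json edits set `classifier_proj_size` to 100 and `num_hidden_layers` to 8. A quick way to confirm what the published checkpoint uses is to read the config back from the Hub; a minimal sketch, assuming the repository id from the commit header and network access:

```python
# Minimal sketch: read the published config and print the two keys
# touched by this commit.
from transformers import AutoConfig

config = AutoConfig.from_pretrained("arslanarjumand/wav2vec-read-aloud")
print(config.classifier_proj_size)  # expected 100 after this commit
print(config.num_hidden_layers)     # expected 8 after this commit
```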
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:4616ad557e7adb6f769d2533776c1da4db84a1246ef782350b8b429dfe0ea901
+size 794371536
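model.safetensors is tracked with Git LFS, so the repository only records a pointer (hash plus size, about 794 MB here) and the tensors live in the LFS object. Once the real file is downloaded, its contents can be listed with the `safetensors` library. A minimal sketch, assuming a local copy of the file:

```python
# Minimal sketch: list tensor names and shapes in a downloaded
# model.safetensors file (the local path is an assumption).
from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        print(name, f.get_tensor(name).shape)
```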
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:160842d9ad9fb8aa42c24765e59cfa16095af1c479acc4c5f26155826d12c7d9
+size 4728
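training_args.bin is likewise an LFS pointer: a spec line, an `oid sha256:` line, and a `size` line. A downloaded copy can be checked against the pointer recorded in this commit with the standard library alone; the sketch below hard-codes the new training_args.bin values from the diff and assumes the file sits in the working directory.

```python
# Minimal sketch: verify a downloaded training_args.bin against the
# LFS pointer values recorded in this commit.
import hashlib
import os

EXPECTED_OID = "160842d9ad9fb8aa42c24765e59cfa16095af1c479acc4c5f26155826d12c7d9"
EXPECTED_SIZE = 4728
path = "training_args.bin"  # assumed local path

with open(path, "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert os.path.getsize(path) == EXPECTED_SIZE, "size mismatch"
assert digest == EXPECTED_OID, "sha256 mismatch"
print("local file matches the LFS pointer")
```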