Bgeorge committed
Commit 926bf94
1 Parent(s): 37bf4c7

End of training
README.md CHANGED
@@ -1,14 +1,11 @@
 ---
 library_name: transformers
-license: bsd-3-clause
-base_model: MIT/ast-finetuned-audioset-10-10-0.4593
+license: apache-2.0
+base_model: facebook/wav2vec2-base
 tags:
 - generated_from_trainer
 metrics:
 - accuracy
-- precision
-- recall
-- f1
 model-index:
 - name: model_dialect
   results: []
@@ -19,13 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # model_dialect
 
-This model is a fine-tuned version of [MIT/ast-finetuned-audioset-10-10-0.4593](https://huggingface.co/MIT/ast-finetuned-audioset-10-10-0.4593) on an unknown dataset.
+This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.5314
-- Accuracy: 0.8037
-- Precision: 0.8089
-- Recall: 0.8091
-- F1: 0.8082
+- Loss: 0.8038
+- Accuracy: 0.7113
 
 ## Model description
 
@@ -44,37 +38,42 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 5e-05
-- train_batch_size: 8
-- eval_batch_size: 8
+- learning_rate: 4e-05
+- train_batch_size: 32
+- eval_batch_size: 32
 - seed: 42
-- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- gradient_accumulation_steps: 4
+- total_train_batch_size: 128
+- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- num_epochs: 14
+- lr_scheduler_warmup_ratio: 0.1
+- num_epochs: 16
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
-|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
-| 1.3748 | 1.0 | 217 | 1.3559 | 0.4042 | 0.2718 | 0.3728 | 0.2601 |
-| 1.2563 | 2.0 | 434 | 1.0753 | 0.5381 | 0.7306 | 0.5138 | 0.5170 |
-| 1.067 | 3.0 | 651 | 1.0822 | 0.5358 | 0.7650 | 0.5140 | 0.4983 |
-| 0.9445 | 4.0 | 868 | 0.8666 | 0.6328 | 0.6747 | 0.6382 | 0.6455 |
-| 0.937 | 5.0 | 1085 | 0.8759 | 0.6651 | 0.7032 | 0.6717 | 0.6599 |
-| 0.9265 | 6.0 | 1302 | 0.7807 | 0.6882 | 0.7358 | 0.6888 | 0.6943 |
-| 0.6593 | 7.0 | 1519 | 0.7010 | 0.7159 | 0.7489 | 0.7195 | 0.7272 |
-| 0.7836 | 8.0 | 1736 | 0.6679 | 0.7252 | 0.7633 | 0.7274 | 0.7356 |
-| 0.5569 | 9.0 | 1953 | 0.6159 | 0.7552 | 0.7826 | 0.7561 | 0.7592 |
-| 0.637 | 10.0 | 2170 | 0.6615 | 0.7436 | 0.7673 | 0.7589 | 0.7533 |
-| 0.6427 | 11.0 | 2387 | 0.5595 | 0.7737 | 0.7864 | 0.7798 | 0.7802 |
-| 0.5604 | 12.0 | 2604 | 0.5676 | 0.7852 | 0.7938 | 0.8000 | 0.7931 |
-| 0.3484 | 13.0 | 2821 | 0.5606 | 0.7991 | 0.8084 | 0.8130 | 0.8070 |
-| 0.514 | 14.0 | 3038 | 0.5314 | 0.8037 | 0.8089 | 0.8091 | 0.8082 |
+| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
+|:-------------:|:-------:|:----:|:---------------:|:--------:|
+| 6.4219 | 0.9455 | 13 | 1.5899 | 0.2610 |
+| 6.2904 | 1.9636 | 27 | 1.4556 | 0.4550 |
+| 5.4442 | 2.9818 | 41 | 1.2566 | 0.5219 |
+| 5.0752 | 4.0 | 55 | 1.1670 | 0.5566 |
+| 4.748 | 4.9455 | 68 | 1.0790 | 0.5958 |
+| 4.2202 | 5.9636 | 82 | 1.0372 | 0.6120 |
+| 4.0075 | 6.9818 | 96 | 0.9833 | 0.6397 |
+| 3.5847 | 8.0 | 110 | 0.9311 | 0.6721 |
+| 3.3304 | 8.9455 | 123 | 0.9242 | 0.6420 |
+| 3.2199 | 9.9636 | 137 | 0.8707 | 0.6928 |
+| 2.9659 | 10.9818 | 151 | 0.8680 | 0.6767 |
+| 2.8954 | 12.0 | 165 | 0.8357 | 0.6952 |
+| 2.6402 | 12.9455 | 178 | 0.8325 | 0.7021 |
+| 2.4812 | 13.9636 | 192 | 0.8158 | 0.6998 |
+| 2.4249 | 14.9818 | 206 | 0.8042 | 0.7090 |
+| 2.4249 | 15.1273 | 208 | 0.8038 | 0.7113 |
 
 
 ### Framework versions
 
-- Transformers 4.45.1
+- Transformers 4.46.0
 - Pytorch 2.4.0
 - Datasets 3.0.1
 - Tokenizers 0.20.0
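The updated hyperparameters combine per-device batching with gradient accumulation. A minimal sketch of how the reported totals fit together, with the step count taken from the training-results table above:

```python
# Effective batch size implied by the updated hyperparameters.
train_batch_size = 32                 # per-device batch size
gradient_accumulation_steps = 4
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 128

# Linear schedule with lr_scheduler_warmup_ratio 0.1, applied over the
# run's 208 optimizer steps (final step count in the results table).
total_steps = 208
warmup_steps = int(0.1 * total_steps)  # 20 steps of warmup, then linear decay

print(total_train_batch_size, warmup_steps)  # → 128 20
```

This is why the card lists both `train_batch_size: 32` and `total_train_batch_size: 128`: gradients from four batches of 32 are accumulated before each optimizer step.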
runs/Oct27_20-22-49_dd2cd04d617b/events.out.tfevents.1730060571.dd2cd04d617b.23.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2495a1114321b4055a5a52f0cbd5b73ce68d481fba3f02f4f3c87656f7d21a36
-size 16325
+oid sha256:0bda11316b86b799ea023ab747bfac00fa62a4a68710fb4100e870c27aa9d1d5
+size 16679
runs/Oct27_20-22-49_dd2cd04d617b/events.out.tfevents.1730061278.dd2cd04d617b.23.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1949be19be2e31d15915d95a9caeaf06bb4055c4ca00700a84dda916d5a876c
+size 411
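The tfevents entries above are Git LFS pointer files (spec v1), not the TensorBoard logs themselves; the actual blobs live in LFS storage keyed by the sha256 oid. A small sketch of reading such a pointer, using a hypothetical `parse_lfs_pointer` helper:

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file (spec v1) into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# Pointer contents of the updated events.out.tfevents file above.
pointer = (
    "version https://git-lfs.github.com/spec/v1\n"
    "oid sha256:0bda11316b86b799ea023ab747bfac00fa62a4a68710fb4100e870c27aa9d1d5\n"
    "size 16679\n"
)
info = parse_lfs_pointer(pointer)
print(info["oid"], info["size"])
```

The `size` field is the byte length of the real log file, which is why the diff shows it growing from 16325 to 16679 as training continued.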