ummonk committed
Commit: d988f69
1 Parent(s): ca92e13

Restore new model card

Files changed (1)
  1. README.md +21 -38
README.md CHANGED
@@ -32,52 +32,35 @@ should probably proofread and complete it, then remove this comment. -->
 
  This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the audiofolder dataset.
  It achieves the following results on the evaluation set:
- - Loss: 1.9466
+ - Loss: 2.0748
  - Accuracy: 0.2708
 
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
+ ## Model description
+ - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.8
- - num_epochs: 12
+ - lr_scheduler_warmup_ratio: 0.7
+ - num_epochs: 14
  - mixed_precision_training: Native AMP
 
- ### Training results
+ ### Training results
 
- | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
- | 2.48 | 1.0 | 48 | 2.4777 | 0.1042 |
- | 2.473 | 2.0 | 96 | 2.4604 | 0.1562 |
- | 2.4772 | 3.0 | 144 | 2.4282 | 0.1042 |
- | 2.3678 | 4.0 | 192 | 2.4007 | 0.1042 |
- | 2.324 | 5.0 | 240 | 2.3261 | 0.2083 |
- | 2.2489 | 6.0 | 288 | 2.2360 | 0.1771 |
- | 1.9909 | 7.0 | 336 | 2.1544 | 0.1875 |
- | 1.9903 | 8.0 | 384 | 2.0937 | 0.1875 |
- | 2.0668 | 9.0 | 432 | 2.0222 | 0.2083 |
- | 1.8473 | 10.0 | 480 | 2.0298 | 0.1875 |
- | 1.8068 | 11.0 | 528 | 1.9965 | 0.25 |
- | 1.699 | 12.0 | 576 | 1.9466 | 0.2708 |
+ | 2.4778 | 1.0 | 48 | 2.4807 | 0.0938 |
+ | 2.4779 | 2.0 | 96 | 2.4651 | 0.1042 |
+ | 2.4751 | 3.0 | 144 | 2.4365 | 0.1042 |
+ | 2.3777 | 4.0 | 192 | 2.4187 | 0.1042 |
+ | 2.3786 | 5.0 | 240 | 2.4050 | 0.1458 |
+ | 2.3754 | 6.0 | 288 | 2.3446 | 0.1458 |
+ | 2.1556 | 7.0 | 336 | 2.2284 | 0.2083 |
+ | 2.1062 | 8.0 | 384 | 2.1533 | 0.2188 |
+ | 2.0081 | 9.0 | 432 | 2.0765 | 0.2292 |
+ | 1.813 | 10.0 | 480 | 2.0671 | 0.2083 |
+ | 1.74 | 11.0 | 528 | 1.9977 | 0.3021 |
+ | 1.4795 | 12.0 | 576 | 2.0588 | 0.2396 |
+ | 1.298 | 13.0 | 624 | 2.0652 | 0.3021 |
+ | 1.2578 | 14.0 | 672 | 2.0748 | 0.2708 |
 
 
  ### Framework versions
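
The card itself ships no code, so below is a minimal sketch of how the hyperparameters restored in this revision (seed 42, Adam with the listed betas/epsilon, linear schedule with warmup ratio 0.7, 14 epochs, native AMP) might be wired into a `transformers` Trainer run. The dataset path, split names, output directory, and the learning rate and batch sizes (which this revision drops but the previous one listed as 5e-05 and 8) are assumptions, not values confirmed by this commit.

```python
# Sketch only: a plausible reconstruction of the training setup from the
# hyperparameters in the restored card. Paths, split names, learning rate,
# and batch sizes are assumptions and may differ from the actual run.
from datasets import load_dataset
from transformers import (
    AutoFeatureExtractor,
    AutoModelForAudioClassification,
    Trainer,
    TrainingArguments,
)

dataset = load_dataset("audiofolder", data_dir="data/audio")  # hypothetical path
feature_extractor = AutoFeatureExtractor.from_pretrained("ntu-spml/distilhubert")

labels = dataset["train"].features["label"].names
model = AutoModelForAudioClassification.from_pretrained(
    "ntu-spml/distilhubert", num_labels=len(labels)
)

def preprocess(batch):
    # Turn raw waveforms into model inputs at distilhubert's 16 kHz rate.
    audio = [x["array"] for x in batch["audio"]]
    return feature_extractor(
        audio,
        sampling_rate=feature_extractor.sampling_rate,
        max_length=16_000 * 10,
        truncation=True,
    )

encoded = dataset.map(preprocess, batched=True, remove_columns=["audio"])

training_args = TrainingArguments(
    output_dir="distilhubert-audiofolder",  # hypothetical output dir
    seed=42,
    learning_rate=5e-5,             # from the previous card revision, assumed
    per_device_train_batch_size=8,  # from the previous card revision, assumed
    per_device_eval_batch_size=8,   # from the previous card revision, assumed
    num_train_epochs=14,
    lr_scheduler_type="linear",
    warmup_ratio=0.7,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="epoch",
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=encoded["train"],
    eval_dataset=encoded["test"],   # assumed split name
    tokenizer=feature_extractor,
)
trainer.train()
```

As a rough sanity check on the assumed batch size, the 48 optimizer steps per epoch in the results table would correspond to roughly 384 training clips at a batch size of 8.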