yuval6967 committed
Commit 2b62638
1 Parent(s): 1f2e177

update model card README.md

Files changed (1)
  1. README.md +34 -8
README.md CHANGED
@@ -4,9 +4,24 @@ tags:
  - generated_from_trainer
  datasets:
  - marsyas/gtzan
+ metrics:
+ - accuracy
  model-index:
  - name: distilhubert-finetuned-gtzan
-   results: []
+   results:
+   - task:
+       name: Audio Classification
+       type: audio-classification
+     dataset:
+       name: GTZAN
+       type: marsyas/gtzan
+       config: all
+       split: train
+       args: all
+     metrics:
+     - name: Accuracy
+       type: accuracy
+       value: 0.84
  ---
  
  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -16,13 +31,8 @@ should probably proofread and complete it, then remove this comment. -->
  
  This model is a fine-tuned version of [ntu-spml/distilhubert](https://huggingface.co/ntu-spml/distilhubert) on the GTZAN dataset.
  It achieves the following results on the evaluation set:
- - eval_loss: 1.9915
- - eval_accuracy: 0.31
- - eval_runtime: 8.0771
- - eval_samples_per_second: 12.381
- - eval_steps_per_second: 0.867
- - epoch: 1.0
- - step: 57
+ - Loss: 0.5858
+ - Accuracy: 0.84
  
  ## Model description
  
@@ -50,6 +60,22 @@ The following hyperparameters were used during training:
  - lr_scheduler_warmup_ratio: 0.1
  - num_epochs: 10
  
+ ### Training results
+ 
+ | Training Loss | Epoch | Step | Validation Loss | Accuracy |
+ |:-------------:|:-----:|:----:|:---------------:|:--------:|
+ | 2.1135        | 1.0   | 57   | 2.0010          | 0.35     |
+ | 1.534         | 2.0   | 114  | 1.4522          | 0.64     |
+ | 1.1237        | 3.0   | 171  | 1.0933          | 0.73     |
+ | 0.9954        | 4.0   | 228  | 0.9852          | 0.77     |
+ | 0.7052        | 5.0   | 285  | 0.7870          | 0.83     |
+ | 0.6404        | 6.0   | 342  | 0.7186          | 0.79     |
+ | 0.5386        | 7.0   | 399  | 0.6662          | 0.83     |
+ | 0.4455        | 8.0   | 456  | 0.6262          | 0.84     |
+ | 0.387         | 9.0   | 513  | 0.5934          | 0.86     |
+ | 0.3174        | 10.0  | 570  | 0.5858          | 0.84     |
+ 
+ 
  ### Framework versions
  
  - Transformers 4.31.0.dev0
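
As a usage note for the card this commit updates: a minimal sketch of running the fine-tuned checkpoint through the `transformers` audio-classification pipeline. The Hub repo id `yuval6967/distilhubert-finetuned-gtzan` and the input file name are assumptions inferred from the committer and model name, not taken from the diff.

```python
# Minimal usage sketch for the model this card describes. The repo id below is
# an assumption (committer name + model name); "blues_clip.wav" is a
# hypothetical local 30-second clip in the GTZAN style.
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="yuval6967/distilhubert-finetuned-gtzan",  # assumed Hub repo id
)

# The pipeline loads and resamples the audio, runs the DistilHuBERT classifier,
# and returns the top genre labels with their scores.
for prediction in classifier("blues_clip.wav"):
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```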