gary109 committed
Commit 5695101
1 Parent(s): 0445a66

update model card README.md

Files changed (1)
1. README.md +18 -3
README.md CHANGED
@@ -5,6 +5,8 @@ tags:
 - generated_from_trainer
 datasets:
 - mir_st500
+metrics:
+- accuracy
 model-index:
 - name: wav2vec2-base-mirst500
   results: []
@@ -16,6 +18,9 @@ should probably proofread and complete it, then remove this comment. -->
 # wav2vec2-base-mirst500
 
 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the /workspace/datasets/datasets/MIR_ST500/MIR_ST500_AUDIO_CLASSIFICATION.py dataset.
+It achieves the following results on the evaluation set:
+- Loss: 1.1219
+- Accuracy: 0.5817
 
 ## Model description
 
@@ -36,16 +41,26 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 3e-05
 - train_batch_size: 16
-- eval_batch_size: 16
+- eval_batch_size: 1
 - seed: 0
+- distributed_type: multi-GPU
+- num_devices: 2
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 64
+- total_train_batch_size: 128
+- total_eval_batch_size: 2
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_ratio: 0.1
-- num_epochs: 10.0
+- num_epochs: 1.0
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Accuracy |
+|:-------------:|:-----:|:----:|:---------------:|:--------:|
+| 1.1779        | 1.0   | 1304 | 1.1219          | 0.5817   |
+
+
 ### Framework versions
 
 - Transformers 4.15.0
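The card points at a local dataset-loading script rather than a Hub dataset id. Assuming that script follows the standard `datasets` loading-script convention, it can be loaded directly by path; a minimal sketch (split and column names are not shown in this diff, so none are assumed):

```python
from datasets import load_dataset

# Sketch only: datasets.load_dataset() accepts a path to a local loading script.
# The path below is copied verbatim from the model card.
mir_st500 = load_dataset(
    "/workspace/datasets/datasets/MIR_ST500/MIR_ST500_AUDIO_CLASSIFICATION.py"
)

# Inspect which splits and columns the script actually provides.
print(mir_st500)
```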
 
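The updated hyperparameters are internally consistent: a per-device train batch of 16 × 4 gradient-accumulation steps × 2 GPUs gives the listed total_train_batch_size of 128, and a per-device eval batch of 1 × 2 GPUs gives the total_eval_batch_size of 2. A minimal `TrainingArguments` sketch matching the listed values, assuming the standard `transformers` `Trainer` setup (the training script itself is not part of this commit):

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments consistent with the card's hyperparameter list.
# Adam betas (0.9, 0.999) and epsilon 1e-08 are the Trainer defaults, so no
# explicit adam_* arguments are needed; "Native AMP" corresponds to fp16=True.
training_args = TrainingArguments(
    output_dir="wav2vec2-base-mirst500",  # assumed output path
    learning_rate=3e-5,
    per_device_train_batch_size=16,       # 16 * 4 accumulation * 2 GPUs = 128
    per_device_eval_batch_size=1,         # 1 * 2 GPUs = 2
    gradient_accumulation_steps=4,
    num_train_epochs=1.0,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    seed=0,
    fp16=True,
)
```

At 128 examples per optimizer step, the 1304 steps in the results table correspond to roughly 1304 × 128 ≈ 167,000 training examples over the single epoch.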
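Since the card's usage sections still read "More information needed", a minimal inference sketch may help readers. It assumes the checkpoint carries the standard audio-classification head and is published under a Hub id matching the model name; the repo id and the 16 kHz mono-input convention are assumptions from wav2vec2-base, not statements in this diff:

```python
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Assumed repo id derived from the card's model name; substitute the real path.
model_id = "gary109/wav2vec2-base-mirst500"

feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# wav2vec2-base expects 16 kHz mono audio; one second of silence as a stand-in.
waveform = torch.zeros(16000)

inputs = feature_extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted])
```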