MatsRooth committed
Commit 2033682 (1 parent: 58e7af4)

README.md and .gitattributes after training

Files changed (1): README.md +13 -55
README.md CHANGED
@@ -21,61 +21,19 @@ It achieves the following results on the evaluation set:
  - Loss: 0.1385
  - Accuracy: 0.9962

- MatsRooth/down_on is the part of [superb ks](https://huggingface.co/datasets/superb)
- with the labels *down* and *on*.
- Superb ks is in turn derived from the [Speech Commands dataset v1.0](https://www.tensorflow.org/datasets/catalog/speech_commands).
- Train/validation/test splits are as in superb ks.
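A minimal sketch of loading the dataset with the `datasets` library (the `label` column and its names `down`/`on` are assumptions based on the description above, mirroring superb ks):

```python
from datasets import load_dataset

# Load the two-label subset; splits follow superb ks (train/validation/test).
ds = load_dataset("MatsRooth/down_on")

# Assumed: an "audio" column plus a ClassLabel column named "label"
# with names ["down", "on"], as in superb ks.
print(ds)
print(ds["train"].features["label"].names)
```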

- ## Intended uses
-
- MatsRooth/down_on and this model exercise a methodology for creating an audio classification
- dataset from local directory structures and audio files, and check whether fine-tuning
- wav2vec2 classification with two labels works well.
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
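The dataset-creation methodology mentioned under "Intended uses" above can be sketched with the `audiofolder` builder (the layout and paths are hypothetical, not necessarily what was used here):

```python
from datasets import load_dataset, Audio

# Hypothetical layout: data/down/*.wav and data/on/*.wav;
# audiofolder turns the folder names into class labels.
ds = load_dataset("audiofolder", data_dir="data")

# wav2vec2-base expects 16 kHz input; resample on access.
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))
```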
 
  ## Training procedure
- Training used `sbatch` on a cluster and the program [run_audio_classification.py](https://github.com/huggingface/transformers).
- `down_on.sub` is shown below; start it with `sbatch down_on.sub`.
-
- ```bash
- #!/bin/bash
- #SBATCH -J down_on              # Job name
- #SBATCH -o down_on_%j.out       # Name of stdout output log file (%j expands to jobID)
- #SBATCH -e down_on_%j.err       # Name of stderr output log file (%j expands to jobID)
- #SBATCH -N 1                    # Total number of nodes requested
- #SBATCH -n 1                    # Total number of cores requested
- #SBATCH --mem=5000              # Total amount of (real) memory requested (per node)
- #SBATCH -t 10:00:00             # Time limit (hh:mm:ss)
- #SBATCH --partition=gpu         # Request partition for resource allocation
- #SBATCH --gres=gpu:1            # Specify a list of generic consumable resources (per node)
-
- cd ~/ac_h
- /home/mr249/env/hugh/bin/python run_audio_classification.py \
-     --model_name_or_path facebook/wav2vec2-base \
-     --dataset_name MatsRooth/down_on \
-     --output_dir wav2vec2-base_down_on \
-     --overwrite_output_dir \
-     --remove_unused_columns False \
-     --do_train \
-     --do_eval \
-     --fp16 \
-     --learning_rate 3e-5 \
-     --max_length_seconds 1 \
-     --attention_mask False \
-     --warmup_ratio 0.1 \
-     --num_train_epochs 5 \
-     --per_device_train_batch_size 32 \
-     --gradient_accumulation_steps 4 \
-     --per_device_eval_batch_size 32 \
-     --dataloader_num_workers 1 \
-     --logging_strategy steps \
-     --logging_steps 10 \
-     --evaluation_strategy epoch \
-     --save_strategy epoch \
-     --load_best_model_at_end True \
-     --metric_for_best_model accuracy \
-     --save_total_limit 3 \
-     --seed 0
- ```
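Note the effective training batch size is 32 × 4 = 128 (`--per_device_train_batch_size` times `--gradient_accumulation_steps`). Once training finishes, the checkpoint in `wav2vec2-base_down_on` can be sanity-checked with the audio-classification pipeline; a minimal sketch with a hypothetical input file:

```python
from transformers import pipeline

# Load the fine-tuned checkpoint written to --output_dir.
clf = pipeline("audio-classification", model="wav2vec2-base_down_on")

# Hypothetical 16 kHz mono clip of someone saying "down" or "on".
print(clf("example.wav"))
```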
 
  ### Training hyperparameters
 
@@ -96,10 +54,10 @@ The following hyperparameters were used during training:
  | Training Loss | Epoch | Step | Validation Loss | Accuracy |
  |:-------------:|:-----:|:----:|:---------------:|:--------:|
  | 0.6089        | 1.0   | 29   | 0.1385          | 0.9962   |
- | 0.1297        | 2.0   | 58   | 0.0513          | 0.9962   |
- | 0.0835        | 3.0   | 87   | 0.0389          | 0.9885   |
- | 0.058         | 4.0   | 116  | 0.0302          | 0.9923   |
- | 0.0481        | 5.0   | 145  | 0.0245          | 0.9942   |
+ | 0.1289        | 2.0   | 58   | 0.0510          | 0.9962   |
+ | 0.0835        | 3.0   | 87   | 0.0433          | 0.9885   |
+ | 0.0605        | 4.0   | 116  | 0.0330          | 0.9923   |
+ | 0.0479        | 5.0   | 145  | 0.0273          | 0.9904   |
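Since `--load_best_model_at_end` and `--metric_for_best_model accuracy` were set, and no later epoch strictly improves on the epoch-1 validation accuracy of 0.9962, the epoch-1 checkpoint is the one retained, matching the loss (0.1385) and accuracy (0.9962) reported at the top of the card.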
  ### Framework versions
 