ivanlau commited on
Commit
d3c1869
1 Parent(s): d569102

added eval.sh and eval results; added robust speech event tag

.gitignore CHANGED
@@ -1 +1,2 @@
- checkpoint-*/
+ checkpoint-*/
+ log*
.ipynb_checkpoints/README-checkpoint.md ADDED
@@ -0,0 +1,78 @@
+ ---
+ language:
+ - zh-HK
+ license: apache-2.0
+ tags:
+ - automatic-speech-recognition
+ - mozilla-foundation/common_voice_8_0
+ - generated_from_trainer
+ - zh-HK
+ - robust-speech-event
+ datasets:
+ - common_voice
+ model-index:
+ - name: ''
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ #
+
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - ZH-HK dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.6726
+ - Wer: 0.9815
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0003
+ - train_batch_size: 32
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - num_epochs: 10.0
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|
+ | No log | 1.0 | 183 | 47.8442 | 1.0 |
+ | No log | 2.0 | 366 | 6.3109 | 1.0 |
+ | 41.8902 | 3.0 | 549 | 6.2392 | 1.0 |
+ | 41.8902 | 4.0 | 732 | 5.9739 | 1.1123 |
+ | 41.8902 | 5.0 | 915 | 4.9014 | 1.9474 |
+ | 5.5817 | 6.0 | 1098 | 3.9892 | 1.0188 |
+ | 5.5817 | 7.0 | 1281 | 3.5080 | 1.0104 |
+ | 5.5817 | 8.0 | 1464 | 3.0797 | 0.9905 |
+ | 3.5579 | 9.0 | 1647 | 2.8111 | 0.9836 |
+ | 3.5579 | 10.0 | 1830 | 2.6726 | 0.9815 |
+
+
+ ### Framework versions
+
+ - Transformers 4.17.0.dev0
+ - Pytorch 1.10.2+cu102
+ - Datasets 1.18.3
+ - Tokenizers 0.11.0
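As a quick sanity check on the hyperparameters in the card above (assuming a single GPU, since total_train_batch_size is listed as 64):

```python
# Sanity-check the derived values reported in the model card.
per_device_train_batch_size = 32
gradient_accumulation_steps = 2

# Effective batch size on one GPU: 32 * 2 = 64, matching total_train_batch_size.
total_train_batch_size = per_device_train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 64

# The results table logs 183 optimizer steps per epoch, which implies
# roughly 183 * 64 = 11712 training examples seen per epoch.
steps_per_epoch = 183
print(steps_per_epoch * total_train_batch_size)  # 11712
```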
.ipynb_checkpoints/eval-checkpoint.sh ADDED
@@ -0,0 +1,8 @@
+ python eval.py \
+ --model_id="ivanlau/wav2vec2-large-xls-r-300m-cantonese" \
+ --dataset="speech-recognition-community-v2/dev_data" \
+ --config="zh-HK" \
+ --split="validation" \
+ --chunk_length_s="5.0" \
+ --stride_length_s="1.0" \
+ --log_outputs \
.ipynb_checkpoints/run-checkpoint.sh CHANGED
@@ -4,8 +4,7 @@ python run_speech_recognition_ctc.py \
  --dataset_config_name="zh-HK" \
  --output_dir="./" \
  --cache_dir="../container_0" \
- --overwrite_output_dir \
- --num_train_epochs="10" \
+ --num_train_epochs="90" \
  --per_device_train_batch_size="32" \
  --per_device_eval_batch_size="16" \
  --gradient_accumulation_steps="2" \
.ipynb_checkpoints/speech-recognition-community-v2_dev_data_zh-HK_validation_eval_results-checkpoint.txt ADDED
@@ -0,0 +1,2 @@
+ WER: 1.0
+ CER: 0.7386630836412496
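The WER and CER figures above are edit-distance error rates. A minimal sketch of both metrics, using a plain Levenshtein implementation for illustration (not the evaluate/jiwer code that eval.py actually uses):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (dynamic programming)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def wer(ref, hyp):
    # Word error rate: edit distance over words, normalized by reference length.
    return edit_distance(ref.split(), hyp.split()) / len(ref.split())

def cer(ref, hyp):
    # Character error rate: the same idea over characters.
    return edit_distance(list(ref), list(hyp)) / len(ref)
```

Note that WER can exceed 1.0 when the hypothesis has more errors than the reference has words, which is how values like 1.9474 appear in the training table above.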
README.md CHANGED
@@ -6,6 +6,8 @@ tags:
  - automatic-speech-recognition
  - mozilla-foundation/common_voice_8_0
  - generated_from_trainer
+ - zh-HK
+ - robust-speech-event
  datasets:
  - common_voice
  model-index:
eval.sh ADDED
@@ -0,0 +1,8 @@
+ python eval.py \
+ --model_id="ivanlau/wav2vec2-large-xls-r-300m-cantonese" \
+ --dataset="speech-recognition-community-v2/dev_data" \
+ --config="zh-HK" \
+ --split="validation" \
+ --chunk_length_s="5.0" \
+ --stride_length_s="1.0" \
+ --log_outputs \
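The chunk_length_s/stride_length_s flags above window long audio into overlapping 5 s chunks with 1 s of context on each edge. A hypothetical sketch of that windowing arithmetic (the function and names are illustrative only, not eval.py's or transformers' actual chunking code):

```python
def chunk_spans(duration_s, chunk_s=5.0, stride_s=1.0):
    """Split [0, duration_s] into overlapping windows.

    Each window is chunk_s long; stride_s of audio on each edge overlaps
    the neighbouring window, so consecutive starts advance by
    chunk_s - 2 * stride_s.
    """
    hop = chunk_s - 2 * stride_s
    spans, start = [], 0.0
    while start < duration_s:
        spans.append((start, min(start + chunk_s, duration_s)))
        start += hop
    return spans

print(chunk_spans(10.0))
# [(0.0, 5.0), (3.0, 8.0), (6.0, 10.0), (9.0, 10.0)]
```

The overlapped edges let the CTC decoder discard unreliable predictions near chunk boundaries before stitching the transcript back together.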
run.sh CHANGED
@@ -4,8 +4,7 @@ python run_speech_recognition_ctc.py \
  --dataset_config_name="zh-HK" \
  --output_dir="./" \
  --cache_dir="../container_0" \
- --overwrite_output_dir \
- --num_train_epochs="10" \
+ --num_train_epochs="90" \
  --per_device_train_batch_size="32" \
  --per_device_eval_batch_size="16" \
  --gradient_accumulation_steps="2" \
speech-recognition-community-v2_dev_data_zh-HK_validation_eval_results.txt ADDED
@@ -0,0 +1,2 @@
+ WER: 1.0
+ CER: 0.7386630836412496