arun100 committed on
Commit
7972beb
1 Parent(s): 194831f

Model save

Files changed (3)
  1. README.md +83 -0
  2. generation_config.json +13 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,83 @@
+ ---
+ license: apache-2.0
+ base_model: xmzhu/whisper-tiny-zh
+ tags:
+ - generated_from_trainer
+ datasets:
+ - common_voice_16_0
+ metrics:
+ - wer
+ model-index:
+ - name: xmzhu/whisper-tiny-zh
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: common_voice_16_0
+       type: common_voice_16_0
+       config: zh-CN
+       split: test
+       args: zh-CN
+     metrics:
+     - name: Wer
+       type: wer
+       value: 91.15267507612005
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # xmzhu/whisper-tiny-zh
+
+ This model is a fine-tuned version of [xmzhu/whisper-tiny-zh](https://huggingface.co/xmzhu/whisper-tiny-zh) on the common_voice_16_0 dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.5744
+ - Wer: 91.1527
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 5e-07
+ - train_batch_size: 32
+ - eval_batch_size: 32
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 200
+ - training_steps: 1000
+ - mixed_precision_training: Native AMP
+
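> Illustrative note (not part of the committed README): the hyperparameters above would typically be expressed through `Seq2SeqTrainingArguments` in `transformers`. The actual training script is not included in this commit, so the argument mapping and the output directory below are assumptions, shown only as a sketch.

```python
# Illustrative only: a rough mapping of the hyperparameters above onto
# Seq2SeqTrainingArguments. The original training script is not in this commit,
# and "./whisper-tiny-zh-ft" is a hypothetical output directory.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-zh-ft",
    learning_rate=5e-07,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,   # 32 * 2 = effective total train batch size of 64
    warmup_steps=200,
    max_steps=1000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```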
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer     |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|
+ | 0.6689        | 0.2   | 200  | 0.5854          | 91.6311 |
+ | 0.6314        | 1.07  | 400  | 0.5791          | 91.1788 |
+ | 0.653         | 1.27  | 600  | 0.5759          | 91.1266 |
+ | 0.699         | 2.13  | 800  | 0.5749          | 91.2049 |
+ | 0.5613        | 3.0   | 1000 | 0.5744          | 91.1527 |
+
+
+ ### Framework versions
+
+ - Transformers 4.37.0.dev0
+ - Pytorch 2.1.2+cu121
+ - Datasets 2.16.2.dev0
+ - Tokenizers 0.15.0
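Illustrative note (not part of the committed files): the card above leaves intended usage unspecified, but a checkpoint like this can be loaded through the `transformers` automatic-speech-recognition pipeline. In the sketch below, `<this-repo-id>` is a placeholder for this checkpoint's Hub id and `sample_zh.wav` is a hypothetical 16 kHz mono recording.

```python
# A minimal inference sketch, not an official usage example from the model authors.
# "<this-repo-id>" is a placeholder for this checkpoint's Hub id; "sample_zh.wav"
# is a hypothetical audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="<this-repo-id>")
result = asr("sample_zh.wav", generate_kwargs={"language": "zh", "task": "transcribe"})
print(result["text"])
```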
generation_config.json ADDED
@@ -0,0 +1,13 @@
+ {
+   "begin_suppress_tokens": [
+     220,
+     50257
+   ],
+   "bos_token_id": 50257,
+   "decoder_start_token_id": 50258,
+   "eos_token_id": 50257,
+   "max_length": 448,
+   "pad_token_id": 50257,
+   "transformers_version": "4.37.0.dev0",
+   "use_cache": false
+ }
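Illustrative note (not part of the committed files): `generation_config.json` is picked up automatically by `generate()` when the model is loaded with `transformers`. A small sketch of inspecting these values, again with `<this-repo-id>` as a placeholder for this checkpoint's Hub id:

```python
# Sketch of loading the generation settings above; the printed values come straight
# from generation_config.json, and "<this-repo-id>" is a placeholder.
from transformers import GenerationConfig

gen_cfg = GenerationConfig.from_pretrained("<this-repo-id>")
print(gen_cfg.max_length)              # 448
print(gen_cfg.begin_suppress_tokens)   # [220, 50257]
print(gen_cfg.decoder_start_token_id)  # 50258
# model.generate(input_features, generation_config=gen_cfg) would apply them explicitly.
```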
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9e1e5bac034e1a6ba9d25abf372f911cc77aff7255a02e7b46077968f38eb037
+ oid sha256:47f2500db40e98322da6eb19901c6c00473a61b3836b7fcdebb5725c85f3a84e
  size 151061672