haseong8012 committed
Commit 456c228
1 Parent(s): 3d01162

Update README.md

Files changed (1): README.md (+19, −11)

README.md CHANGED
```diff
@@ -2,30 +2,27 @@
 language:
 - ko
 license: apache-2.0
-base_model: openai/whisper-small
+base_model: openai/whisper-tiny
 tags:
 - hf-asr-leaderboard
 - generated_from_trainer
 datasets:
 - haseong8012/child-50k
 model-index:
-- name: whisper-small_child-50k2
+- name: whisper_compare/whisper-small_child-50k2/checkpoint-8000
   results: []
-metrics:
-- wer
-- cer
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# whisper-small-fineTuned_By_korean-child-command-voice_train-0-10000_smaplingRate-16000-aag-test1
+# whisper_compare/whisper-small_child-50k2/checkpoint-8000
 
-This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the haseong8012/child-50k dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the haseong8012/child-50k dataset.
 
-- eval_loss: 0.01577906496822834
-- eval_cer: 0.5696816412724527
-- eval_wer: 1.2814293799013663
+- loss: 0.01577906496822834
+- wer: 1.2814293799013663
+- cer: 0.5696816412724527
 
 ## Model description
 
@@ -44,7 +41,18 @@
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-
+- learning_rate: 3.75e-05
+- train_batch_size: 32
+- eval_batch_size: 16
+- seed: 42
+- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+- lr_scheduler_type: linear
+- lr_scheduler_warmup_steps: 500
+- training_steps: 5000
 
 ### Framework versions
 
+- Transformers 4.34.0
+- Pytorch 2.1.0+cu121
+- Datasets 2.14.5
+- Tokenizers 0.14.1
```
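For context on what this commit's metrics and hyperparameters mean, here is a minimal, self-contained sketch — not the card's actual code, which was presumably produced by the HF `Trainer` with `evaluate`/`jiwer`-style metrics and `transformers`' built-in linear scheduler. It shows word/character error rate computed via Levenshtein edit distance, plus the learning rate implied by `lr_scheduler_type: linear` with 500 warmup steps over 5000 training steps. All function names here are hypothetical.

```python
# Illustrative sketch only: the card's reported numbers come from the HF
# Trainer, not from this code.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (words or characters)."""
    dp = list(range(len(hyp) + 1))      # dp[j] = distance(ref[:i], hyp[:j])
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i          # prev holds dp[i-1][j-1]
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,              # deletion
                dp[j - 1] + 1,          # insertion
                prev + (r != h),        # substitution (cost 0 if equal)
            )
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edits / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: character-level edits / reference length."""
    return edit_distance(reference, hypothesis) / len(reference)

def lr_at_step(step, base_lr=3.75e-5, warmup_steps=500, total_steps=5000):
    """LR under linear warmup then linear decay, per the listed hyperparameters."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Since the card reports wer 1.28 and cer 0.57 with WER above CER, these are most plausibly percentages (≈1.28% and ≈0.57%) rather than fractions, though the card itself does not say.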