NathanRoll committed
Commit f1950ee
1 parent: 23c8634

update model card README.md

Files changed (1): README.md (+6 -6)
README.md CHANGED

@@ -5,16 +5,16 @@ license: apache-2.0
 tags:
 - generated_from_trainer
 datasets:
-- NathanRoll/SBC_single_speaker
+- NathanRoll/SBC_segmented
 model-index:
-- name: ProsodPy
+- name: PSST base
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# ProsodPy
+# PSST base
 
 This model is a fine-tuned version of [openai/whisper-medium.en](https://huggingface.co/openai/whisper-medium.en) on the Santa Barbara Corpus of Spoken American English dataset.
 
@@ -36,15 +36,15 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
+- train_batch_size: 32
 - eval_batch_size: 8
 - seed: 42
-- gradient_accumulation_steps: 4
+- gradient_accumulation_steps: 2
 - total_train_batch_size: 64
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 50
-- training_steps: 200
+- training_steps: 400
 - mixed_precision_training: Native AMP
 
 ### Training results
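
The card names the checkpoint this model was fine-tuned from, but the diff does not show the fine-tuned model's own repo id, so the sketch below loads the base checkpoint named in the card, assuming the standard `transformers` Whisper classes:

```python
# Minimal sketch: loading the base checkpoint the card says PSST base was
# fine-tuned from. The fine-tuned repo id is not shown in the diff, so the
# base model id from the card is used here.
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("openai/whisper-medium.en")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium.en")
```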
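
For reference, a minimal sketch of how the updated hyperparameters could map onto `transformers.Seq2SeqTrainingArguments`, assuming the standard Hugging Face `Seq2SeqTrainer` workflow an auto-generated card like this implies; the `output_dir` name is hypothetical, and every numeric value is taken from the new side of the diff:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="psst-base",          # hypothetical directory name
    learning_rate=1e-5,
    per_device_train_batch_size=32,  # train_batch_size: 32
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    gradient_accumulation_steps=2,   # 32 * 2 = 64 total_train_batch_size
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,                 # lr_scheduler_warmup_steps: 50
    max_steps=400,                   # training_steps: 400
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # mixed_precision_training: Native AMP
)
```

Note that the commit leaves the effective batch size unchanged: the old 16 × 4 and the new 32 × 2 both give the listed total_train_batch_size of 64, while training_steps doubles from 200 to 400.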