JulioCastro committed
Commit 01a95e4 · 1 parent: d1f2760

Update README.md

Files changed (1)
  1. README.md +38 -12
README.md CHANGED
@@ -1,12 +1,38 @@
- will edit later:
-
- Loading best model from ./checkpoint-1000 (score: 10.968809748023856).
- TrainOutput(global_step=1000, training_loss=0.25546978759765626, metrics={'train_runtime': 62087.7615, 'train_samples_per_second': 1.031, 'train_steps_per_second': 0.016, 'total_flos': 6.531871408128e+19, 'train_loss': 0.25546978759765626, 'epoch': 1.0})
-
- "dataset_tags": "mozilla-foundation/common_voice_11_0",
- "dataset": "Common Voice 11.0",
- "language": "es",
- "model_name": "Whisper Md Ca - 1k",
- "finetuned_from": "openai/whisper-medium",
- "tasks": "automatic-speech-recognition",
- "tags": "whisper-event",
+ # Whisper Md Ca - 1k
+
+ This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset.
+ It achieves the following results on the evaluation set:
+
+ - Loss: 0.2554
+ - Wer: 10.9688
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
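
As an illustrative aside (not part of the commit itself), a minimal inference sketch using the transformers ASR pipeline; the repo id is a placeholder, since the card does not say where the checkpoint is published:

```python
# Minimal inference sketch. "your-username/whisper-md-ca-1k" is a
# hypothetical repo id, not taken from the card; substitute the real one.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-md-ca-1k",  # placeholder model id
    chunk_length_s=30,  # Whisper transcribes audio in 30-second windows
)

print(asr("sample.wav")["text"])
```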
+
+ ## Training and evaluation data
+
+ More information needed
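
For context, a hedged sketch of loading the dataset named in the removed draft metadata (mozilla-foundation/common_voice_11_0 with language "es"); the choice of the train split is an assumption:

```python
# Sketch of loading the dataset referenced in the draft front matter.
# The "es" config comes from the old metadata; the "train" split is an
# assumption. The dataset requires accepting its terms on the Hub.
from datasets import Audio, load_dataset

common_voice = load_dataset(
    "mozilla-foundation/common_voice_11_0", "es", split="train"
)
# Whisper's feature extractor expects 16 kHz audio.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
```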
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+
+ - learning_rate: 1e-05
+ - train_batch_size: 32
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 64
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 100
+ - training_steps: 1000
+ - mixed_precision_training: Native AMP
+
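
As a reading aid, the list above maps roughly onto the following Seq2SeqTrainingArguments sketch; this is not the author's training script, the output directory is invented, and fp16=True is assumed to be how "Native AMP" was enabled:

```python
# Rough Seq2SeqTrainingArguments equivalent of the listed hyperparameters.
# output_dir is an assumed path; the other values mirror the list above
# (the Adam betas/epsilon shown are also the library defaults).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-md-ca-1k",  # assumption, not from the card
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 32 * 2 = 64 total train batch size
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    fp16=True,  # assumed expression of "Native AMP"
)
```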
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer     |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|
+ | 0.2554        | 1.0   | 1000 | 0.2554          | 10.9688 |
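
For readers unfamiliar with the metric, Wer here is word error rate, reported as a percentage. An illustrative computation with the evaluate library (the sentences are invented, not drawn from the evaluation set):

```python
# Illustrative WER computation; the strings are made up, and the printed
# value has no relation to the 10.9688 reported above.
import evaluate

wer_metric = evaluate.load("wer")
wer = 100 * wer_metric.compute(
    predictions=["hola que tal estas"],
    references=["hola qué tal estás"],
)
print(f"WER: {wer:.2f}%")
```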
+
+ ### Framework versions
+
+ - Transformers 4.26.0.dev0
+ - Pytorch 1.13.1+cu117
+ - Datasets 2.7.1.dev0
+ - Tokenizers 0.13.2