fsicoli committed
Commit 542335a
1 Parent(s): 3979e75

End of training

README.md CHANGED
@@ -5,7 +5,7 @@ base_model: openai/whisper-large-v3
 tags:
 - generated_from_trainer
 datasets:
-- common_voice_18_0
+- fsicoli/common_voice_18_0
 metrics:
 - wer
 model-index:
@@ -15,15 +15,15 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: common_voice_18_0
-      type: common_voice_18_0
+      name: fsicoli/common_voice_18_0 pt
+      type: fsicoli/common_voice_18_0
       config: pt
       split: None
       args: pt
     metrics:
     - name: Wer
       type: wer
-      value: 0.10309096732863549
+      value: 0.10807174887892376
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -31,10 +31,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-large-v3-pt-3000h-4
 
-This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the common_voice_18_0 dataset.
+This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the fsicoli/common_voice_18_0 pt dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3697
-- Wer: 0.1031
+- Loss: 0.1938
+- Wer: 0.1081
 
 ## Model description
 
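The updated card describes a Portuguese ASR checkpoint fine-tuned from openai/whisper-large-v3. As a minimal usage sketch (assuming the checkpoint is published under the repository id fsicoli/whisper-large-v3-pt-3000h-4, inferred from the commit author and the card title, and using a placeholder audio path):

```python
# Minimal sketch: transcribe Portuguese speech with the fine-tuned checkpoint.
# "fsicoli/whisper-large-v3-pt-3000h-4" is an assumed repository id; "sample.wav"
# is a placeholder for a local 16 kHz mono recording.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="fsicoli/whisper-large-v3-pt-3000h-4",  # assumed repo id
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

result = asr(
    "sample.wav",
    generate_kwargs={"language": "portuguese", "task": "transcribe"},
)
print(result["text"])
```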
all_results.json ADDED
@@ -0,0 +1,15 @@
+{
+    "epoch": 10.0,
+    "eval_loss": 0.19376881420612335,
+    "eval_runtime": 8271.5667,
+    "eval_samples": 9494,
+    "eval_samples_per_second": 1.148,
+    "eval_steps_per_second": 0.144,
+    "eval_wer": 0.10807174887892376,
+    "total_flos": 7.517848352823706e+20,
+    "train_loss": 0.018776766294569685,
+    "train_runtime": 360719.5114,
+    "train_samples": 22116,
+    "train_samples_per_second": 0.613,
+    "train_steps_per_second": 0.153
+}
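The throughput figures in all_results.json are internally consistent: eval_samples / eval_runtime = 9494 / 8271.5667 ≈ 1.148 and train_samples × epoch / train_runtime = 22116 × 10 / 360719.5114 ≈ 0.613, matching the reported samples-per-second values. A quick check:

```python
# Cross-check the reported throughput figures from all_results.json.
eval_samples, eval_runtime = 9494, 8271.5667
train_samples, epochs, train_runtime = 22116, 10, 360719.5114

print(round(eval_samples / eval_runtime, 3))             # 1.148 (eval_samples_per_second)
print(round(train_samples * epochs / train_runtime, 3))  # 0.613 (train_samples_per_second)
```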
eval_results.json ADDED
@@ -0,0 +1,9 @@
+{
+    "epoch": 10.0,
+    "eval_loss": 0.19376881420612335,
+    "eval_runtime": 8271.5667,
+    "eval_samples": 9494,
+    "eval_samples_per_second": 1.148,
+    "eval_steps_per_second": 0.144,
+    "eval_wer": 0.10807174887892376
+}
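The eval_wer of 0.1081 means roughly one word error for every nine to ten reference words. A minimal sketch of how such a score is commonly computed with the evaluate library follows; the actual metric setup and text normalization used during training are not part of this commit, so the inputs below are hypothetical.

```python
# Illustrative only: word error rate via the Hugging Face `evaluate` library.
# Assumption: the training script used the standard "wer" metric; the real
# predictions, references, and any normalization are not shown in this commit.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["o gato sentou no tapete"]    # hypothetical model transcription
references = ["o gato se sentou no tapete"]  # hypothetical reference transcript

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```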
runs/Aug28_14-18-51_DITEC2014063010/events.out.tfevents.1725558105.DITEC2014063010.87064.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8d6cc6a905577cafa8ef12060ba1a29a07b8d0e1a7ed4b8008bc93ce45c8f740
+size 412
train_results.json ADDED
@@ -0,0 +1,9 @@
+{
+    "epoch": 10.0,
+    "total_flos": 7.517848352823706e+20,
+    "train_loss": 0.018776766294569685,
+    "train_runtime": 360719.5114,
+    "train_samples": 22116,
+    "train_samples_per_second": 0.613,
+    "train_steps_per_second": 0.153
+}
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff