fsicoli committed
Commit cadc738
1 Parent(s): ae53638

End of training

README.md CHANGED
@@ -3,11 +3,25 @@ license: apache-2.0
 base_model: openai/whisper-large-v3
 tags:
 - generated_from_trainer
+ datasets:
+ - fsicoli/cv17-fleurs-coraa-mls-ted-alcaim-cf-cdc-lapsbm-lapsmail-sydney-lingualibre-voxforge-tatoeba
 metrics:
 - wer
 model-index:
 - name: whisper-large-v3-pt-1000h
-   results: []
+   results:
+   - task:
+       name: Automatic Speech Recognition
+       type: automatic-speech-recognition
+     dataset:
+       name: fsicoli/cv17-fleurs-coraa-mls-ted-alcaim-cf-cdc-lapsbm-lapsmail-sydney-lingualibre-voxforge-tatoeba default
+       type: fsicoli/cv17-fleurs-coraa-mls-ted-alcaim-cf-cdc-lapsbm-lapsmail-sydney-lingualibre-voxforge-tatoeba
+       args: default
+     metrics:
+     - name: Wer
+       type: wer
+       value: 0.11132023872721715
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,10 +29,10 @@ should probably proofread and complete it, then remove this comment. -->

 # whisper-large-v3-pt-1000h

- This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on an unknown dataset.
+ This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the fsicoli/cv17-fleurs-coraa-mls-ted-alcaim-cf-cdc-lapsbm-lapsmail-sydney-lingualibre-voxforge-tatoeba default dataset.
 It achieves the following results on the evaluation set:
- - Loss: 0.5361
- - Wer: 0.1130
+ - Loss: 0.5576
+ - Wer: 0.1113

 ## Model description

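The updated card still ships without a usage snippet, so here is a minimal inference sketch for the fine-tuned checkpoint. It is not part of this commit: the repo id `fsicoli/whisper-large-v3-pt-1000h` and the audio file name are assumed placeholders.

```python
# Minimal sketch (not part of this commit): transcribe Portuguese audio with the
# fine-tuned checkpoint via the transformers ASR pipeline.
# Assumptions: the repo id "fsicoli/whisper-large-v3-pt-1000h" and the file
# "sample_pt.wav" are illustrative placeholders.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="fsicoli/whisper-large-v3-pt-1000h",  # assumed repo id
    chunk_length_s=30,  # split long-form audio into 30 s chunks
)

result = asr(
    "sample_pt.wav",
    generate_kwargs={"language": "portuguese", "task": "transcribe"},
)
print(result["text"])
```
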
all_results.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "epoch": 3.22,
+     "eval_loss": 0.5576171875,
+     "eval_runtime": 11516.2104,
+     "eval_samples": 9467,
+     "eval_samples_per_second": 0.822,
+     "eval_steps_per_second": 0.051,
+     "eval_wer": 0.11132023872721715,
+     "train_loss": 0.0028538422119326707,
+     "train_runtime": 23952.0519,
+     "train_samples": 813653,
+     "train_samples_per_second": 109.552,
+     "train_steps_per_second": 3.424
+ }
eval_results.json ADDED
@@ -0,0 +1,9 @@
+ {
+     "epoch": 3.22,
+     "eval_loss": 0.5576171875,
+     "eval_runtime": 11516.2104,
+     "eval_samples": 9467,
+     "eval_samples_per_second": 0.822,
+     "eval_steps_per_second": 0.051,
+     "eval_wer": 0.11132023872721715
+ }
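For reference, `eval_wer` above is the standard word error rate. A small, self-contained sketch of how such a value is computed with the `evaluate` library follows; the sentences are made up, and the Trainer's actual evaluation loop is not part of this commit.

```python
# Illustrative only: how a WER value like eval_wer is defined, computed with the
# `evaluate` library on made-up Portuguese sentences.
import evaluate

wer_metric = evaluate.load("wer")

references = ["o gato sentou no tapete"]
predictions = ["o gato sentou no tapete azul"]

# WER = (substitutions + deletions + insertions) / number of reference words
wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one insertion over five reference words -> 0.2000
```
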
runs/Apr11_13-47-41_gpu-model-training/events.out.tfevents.1712989329.gpu-model-training.59563.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:63eb95a857a8378c8010e800a2e6071f716d3173dc14490310b2742e44c5afd1
+ size 364
train_results.json ADDED
@@ -0,0 +1,8 @@
+ {
+     "epoch": 3.22,
+     "train_loss": 0.0028538422119326707,
+     "train_runtime": 23952.0519,
+     "train_samples": 813653,
+     "train_samples_per_second": 109.552,
+     "train_steps_per_second": 3.424
+ }
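As a quick consistency check on the throughput numbers (an aside, not part of the commit): `train_samples_per_second` should roughly equal `train_samples * epoch / train_runtime`; the small gap comes from `epoch` being rounded to two decimals in the JSON.

```python
# Sanity check on the reported training throughput using the values above.
train_samples = 813_653
epoch = 3.22                 # rounded in the JSON
train_runtime = 23_952.0519  # seconds

print(train_samples * epoch / train_runtime)  # ~109.4 vs. the reported 109.552
```
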
trainer_state.json ADDED
The diff for this file is too large to render. See raw diff