StefanJevtic63 committed
Commit 1fbd8f6
Parent: 953ff8d

End of training

README.md CHANGED
@@ -1,24 +1,24 @@
 ---
 library_name: peft
+language:
+- sr
 license: apache-2.0
 base_model: openai/whisper-large-v2
 tags:
 - generated_from_trainer
-datasets:
-- common_voice_17_0
 model-index:
-- name: whisper-large-v2-sr-lora
+- name: Whisper - Serbian Model
   results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# whisper-large-v2-sr-lora
+# Whisper - Serbian Model
 
-This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the common_voice_17_0 dataset.
+This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3487
+- Loss: 0.1401
 
 ## Model description
 
@@ -45,15 +45,18 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 16
 - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- lr_scheduler_warmup_steps: 5
-- training_steps: 25
+- lr_scheduler_warmup_steps: 50
+- training_steps: 4000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss |
 |:-------------:|:------:|:----:|:---------------:|
-| 0.296         | 0.2128 | 25   | 0.3487          |
+| 0.2015        | 0.0705 | 1000 | 0.2007          |
+| 0.1646        | 0.1409 | 2000 | 0.1743          |
+| 0.1475        | 0.2114 | 3000 | 0.1515          |
+| 0.1381        | 0.2819 | 4000 | 0.1401          |
 
 
 ### Framework versions
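The updated card specifies `lr_scheduler_type: linear` with 50 warmup steps over 4000 training steps. A minimal sketch of that schedule, assuming the usual transformers-style linear warmup followed by linear decay to zero (the base learning rate is not shown in this hunk, so `base_lr=1e-3` below is a placeholder, not the value actually used):

```python
def linear_warmup_linear_decay(step, warmup_steps=50, training_steps=4000, base_lr=1e-3):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0 at training_steps."""
    if step < warmup_steps:
        # Warmup phase: ramp from 0 up to base_lr.
        return base_lr * step / max(1, warmup_steps)
    # Decay phase: fall linearly from base_lr at warmup_steps to 0 at training_steps.
    return base_lr * max(0.0, (training_steps - step) / max(1, training_steps - warmup_steps))
```

With these settings the learning rate peaks at step 50 and reaches zero exactly at the final step, 4000, matching the evaluation checkpoints in the table above.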
adapter_model.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:5ee523d04fe63b028657ecd7a397da65abfab73f39137fb71787634d0af8eb8a
+oid sha256:3b0fbfede35120ce1281233819d8c18ea0aad708f896a407e2c93abc69e3059d
 size 63056714
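The `adapter_model.bin` diff above is a Git LFS pointer file, not the binary itself: each line is a `key value` pair (`version`, `oid`, `size`), and only the `oid` changed in this commit. A small sketch of reading such a pointer (the `parse_lfs_pointer` helper is hypothetical, not part of this repo):

```python
def parse_lfs_pointer(text):
    """Parse a Git LFS pointer file (one 'key value' pair per line) into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        # Split on the first space only: the value may itself contain spaces.
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer contents from the commit above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:3b0fbfede35120ce1281233819d8c18ea0aad708f896a407e2c93abc69e3059d
size 63056714
"""
info = parse_lfs_pointer(pointer)
```

Note that the adapter's `size` is unchanged (63056714 bytes) while the `oid` differs, which is what you expect when retraining rewrites the same tensors with new values.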
runs/Dec03_01-43-54_DESKTOP-1TFDHRE/events.out.tfevents.1733186636.DESKTOP-1TFDHRE.15736.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:c932ed84bc1f53083d59ddcae9e3b0d7cdcff20968305a82e949b2001563ff86
-size 32542
+oid sha256:9aa47d03a3b0dc5c5e5e9a5911ad32d91c55ab65e6acb081b0e8f76473f054c1
+size 41607