Na0s committed
Commit
727f7ca
1 Parent(s): 974e660

Update README.md

Files changed (1): README.md (+9, −5)
README.md CHANGED
@@ -7,9 +7,13 @@ tags:
 - generated_from_trainer
 datasets:
 - medical_data
+- Na0s/Primock_med
 model-index:
 - name: med-whisper-large-final
   results: []
+metrics:
+- cer
+- wer
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -22,15 +26,15 @@ This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingf
 
 ## Model description
 
-More information needed
+Fine-tuned version of openai/whisper-large-v3, adapted via transfer learning to doctor/patient consultations.
 
 ## Intended uses & limitations
 
-More information needed
+Medical transcription.
 
 ## Training and evaluation data
 
-More information needed
+Na0s/Primock_med
 
 ## Training procedure
 
@@ -49,7 +53,7 @@ The following hyperparameters were used during training:
 - training_steps: 500
 - mixed_precision_training: Native AMP
 
-### Training results
+### Performance
 
 
 
@@ -58,4 +62,4 @@ The following hyperparameters were used during training:
 - Transformers 4.42.4
 - Pytorch 2.3.1+cu121
 - Datasets 2.20.0
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
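The commit adds `cer` and `wer` to the card's metrics list. As a rough illustration of what word error rate measures, here is a minimal sketch of the underlying word-level edit-distance computation; real evaluations of a model like this would normally use a library such as `jiwer` or the `evaluate` package rather than hand-rolled code:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)
```

Character error rate (`cer`) is the same computation applied to characters instead of words, which is why the two metrics are usually reported together for transcription models.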