Scrya committed on
Commit
2f779db
1 Parent(s): 1684a4c

update model card README.md

README.md ADDED
@@ -0,0 +1,86 @@
---
language:
- ms_my
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- google/fleurs
metrics:
- wer
model-index:
- name: Whisper Medium MS - Augmented
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: google/fleurs
      type: google/fleurs
      config: ms_my
      split: test
      args: ms_my
    metrics:
    - name: Wer
      type: wer
      value: 9.578362255965294
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Medium MS - Augmented

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the google/fleurs dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2066
- Wer: 9.5784
- Cer: 2.8109

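The Wer figure above is the word error rate as a percentage: word-level edit distance divided by the number of reference words. A minimal pure-Python sketch of the metric (the card's value was produced by the training pipeline's own tooling, not this code):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count, as a percentage."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return 100.0 * dp[-1][-1] / len(ref)

# One substitution in a four-word reference -> 25.0
print(wer("saya suka makan nasi", "saya suka makan roti"))
```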
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP

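The effective batch size and the shape of the linear schedule follow directly from these numbers; a small sketch using only the values listed above (the schedule function is a generic linear warmup/decay, assumed to match the `linear` scheduler's behavior):

```python
# Effective batch size: per-device batch times gradient accumulation.
train_batch_size = 2
gradient_accumulation_steps = 16
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 32, as listed above

def linear_lr(step, base_lr=1e-05, warmup_steps=100, training_steps=1000):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0 at training_steps."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (training_steps - step) / (training_steps - warmup_steps)

print(total_train_batch_size)  # 32
print(linear_lr(50))           # halfway through warmup: 5e-06
print(linear_lr(1000))         # decayed to 0.0 at the final step
```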
### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     | Cer    |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|
| 0.0876        | 2.15  | 200  | 0.1949          | 10.3105 | 3.0685 |
| 0.0064        | 4.3   | 400  | 0.1974          | 9.7004  | 2.9596 |
| 0.0014        | 6.45  | 600  | 0.2031          | 9.6190  | 2.8955 |
| 0.001         | 8.6   | 800  | 0.2058          | 9.6055  | 2.8440 |
| 0.0009        | 10.75 | 1000 | 0.2066          | 9.5784  | 2.8109 |

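The Epoch column implies the training-set size: 1000 optimizer steps span 10.75 epochs, so one epoch is roughly 93 steps of 32 examples each. A back-of-the-envelope check, assuming the standard epoch = step / steps_per_epoch bookkeeping:

```python
# Values taken from the training-results table and hyperparameters above.
total_steps, final_epoch, effective_batch = 1000, 10.75, 32

steps_per_epoch = total_steps / final_epoch           # ~93 steps per epoch
approx_train_examples = round(steps_per_epoch) * effective_batch

print(round(steps_per_epoch), approx_train_examples)  # roughly 93 steps, ~2976 examples
```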
### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
fine-tune-whisper-non-streaming-ms.ipynb CHANGED
@@ -1042,7 +1042,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 23,
    "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de",
    "metadata": {
     "id": "ee8b7b8e-1c9a-4d77-9137-1778a629e6de",
@@ -1070,8 +1070,8 @@
    "\n",
    " <div>\n",
    " \n",
-   " <progress value='1001' max='1000' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
-   " [1000/1000 3:47:15, Epoch 10.75/11]\n",
+   " <progress value='1000' max='1000' style='width:300px; height:20px; vertical-align: middle;'></progress>\n",
+   " [1000/1000 4:05:08, Epoch 10/11]\n",
    " </div>\n",
    " <table border=\"1\" class=\"dataframe\">\n",
    " <thead>\n",
@@ -1177,8 +1177,24 @@
    "Configuration saved in ./checkpoint-1000/config.json\n",
    "Model weights saved in ./checkpoint-1000/pytorch_model.bin\n",
    "Feature extractor saved in ./checkpoint-1000/preprocessor_config.json\n",
-   "Feature extractor saved in ./preprocessor_config.json\n"
+   "Feature extractor saved in ./preprocessor_config.json\n",
+   "\n",
+   "\n",
+   "Training completed. Do not forget to share your model on huggingface.co/models =)\n",
+   "\n",
+   "\n",
+   "Loading best model from ./checkpoint-1000 (score: 9.578362255965294).\n"
    ]
+  },
+  {
+   "data": {
+    "text/plain": [
+     "TrainOutput(global_step=1000, training_loss=0.12478019709698857, metrics={'train_runtime': 14718.8594, 'train_samples_per_second': 2.174, 'train_steps_per_second': 0.068, 'total_flos': 3.26797691387904e+19, 'train_loss': 0.12478019709698857, 'epoch': 10.75})"
+    ]
+   },
+   "execution_count": 23,
+   "metadata": {},
+   "output_type": "execute_result"
   }
  ],
  "source": [
@@ -1197,7 +1213,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 24,
    "id": "c704f91e-241b-48c9-b8e0-f0da396a9663",
    "metadata": {
     "id": "c704f91e-241b-48c9-b8e0-f0da396a9663"
@@ -1232,7 +1248,48 @@
    "metadata": {
     "id": "d7030622-caf7-4039-939b-6195cdaa2585"
    },
-   "outputs": [],
+   "outputs": [
+    {
+     "name": "stderr",
+     "output_type": "stream",
+     "text": [
+      "Saving model checkpoint to ./\n",
+      "Configuration saved in ./config.json\n",
+      "Model weights saved in ./pytorch_model.bin\n",
+      "Feature extractor saved in ./preprocessor_config.json\n",
+      "Several commits (2) will be pushed upstream.\n",
+      "The progress bars may be unreliable.\n"
+     ]
+    },
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "cfdf5c66dd1c4c61b8e49df34cf219bb",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Upload file pytorch_model.bin: 0%| | 32.0k/2.85G [00:00<?, ?B/s]"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    },
+    {
+     "data": {
+      "application/vnd.jupyter.widget-view+json": {
+       "model_id": "d3c38f0c5bd843b7a1f6eb6f30797880",
+       "version_major": 2,
+       "version_minor": 0
+      },
+      "text/plain": [
+       "Upload file runs/Dec20_10-49-53_DANDAN/events.out.tfevents.1671504600.DANDAN.793.0: 100%|##########| 12.4k/12.…"
+      ]
+     },
+     "metadata": {},
+     "output_type": "display_data"
+    }
+   ],
   "source": [
    "trainer.push_to_hub(**kwargs)"
   ]