End of training
README.md CHANGED
@@ -15,12 +15,12 @@ should probably proofread and complete it, then remove this comment. -->
 This model was trained from scratch on an unknown dataset.
 It achieves the following results on the evaluation set:
 - eval_loss: 2.2188
-- eval_model_preparation_time: 0.
-- eval_cer: 0.
-- eval_wer: 0.
-- eval_runtime:
-- eval_samples_per_second:
-- eval_steps_per_second: 0.
+- eval_model_preparation_time: 0.0044
+- eval_cer: 0.3354
+- eval_wer: 0.4724
+- eval_runtime: 43.1274
+- eval_samples_per_second: 13.263
+- eval_steps_per_second: 0.835
 - step: 0

 ## Model description
all_results.json CHANGED
@@ -1,10 +1,10 @@
 {
-    "eval_cer": 0.
+    "eval_cer": 0.33544957921157537,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
+    "eval_model_preparation_time": 0.0044,
-    "eval_runtime":
+    "eval_runtime": 43.1274,
     "eval_samples": 572,
-    "eval_samples_per_second":
+    "eval_samples_per_second": 13.263,
-    "eval_steps_per_second": 0.
+    "eval_steps_per_second": 0.835,
-    "eval_wer": 0.
+    "eval_wer": 0.47242942811174493
 }
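The eval_wer and eval_cer values recorded above are edit-distance ratios. As an illustrative sketch only (not the training script's actual metric code), word and character error rate can be computed with a plain Levenshtein distance:

```python
def edit_distance(ref, hyp):
    # Classic dynamic-programming Levenshtein distance over two sequences.
    d = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,         # deletion
                      d[j - 1] + 1,     # insertion
                      prev + (r != h))  # substitution (free if tokens match)
            prev, d[j] = d[j], cur
    return d[-1]

def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edits divided by reference word count.
    ref, hyp = reference.split(), hypothesis.split()
    return edit_distance(ref, hyp) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    # Character error rate: same distance computed over characters.
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("a b c d", "a x c d")` is 0.25 (one substitution over four reference words). Production runs typically use a metrics library (e.g. `evaluate` or `jiwer`) rather than hand-rolled code.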
eval_results.json CHANGED
@@ -1,10 +1,10 @@
 {
-    "eval_cer": 0.
+    "eval_cer": 0.33544957921157537,
     "eval_loss": 2.218759059906006,
-    "eval_model_preparation_time": 0.
+    "eval_model_preparation_time": 0.0044,
-    "eval_runtime":
+    "eval_runtime": 43.1274,
     "eval_samples": 572,
-    "eval_samples_per_second":
+    "eval_samples_per_second": 13.263,
-    "eval_steps_per_second": 0.
+    "eval_steps_per_second": 0.835,
-    "eval_wer": 0.
+    "eval_wer": 0.47242942811174493
 }
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
+oid sha256:35cc03612b7f32d98e24dad2241a09f1fb03ab2aef56cadb54287cffb3f8f9c2
 size 5496
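training_args.bin is stored as a Git LFS pointer rather than the binary itself: three key/value lines (version, oid, size). A minimal, hypothetical parser for such pointer text, shown only to clarify the format:

```python
def parse_lfs_pointer(text: str) -> dict:
    # Each line of a Git LFS pointer is "<key> <value>"; split on the
    # first space so the value may itself contain spaces or colons.
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:35cc03612b7f32d98e24dad2241a09f1fb03ab2aef56cadb54287cffb3f8f9c2
size 5496"""

info = parse_lfs_pointer(pointer)
```

Here `info["oid"]` carries the SHA-256 of the real file content and `info["size"]` its byte length, which is how the LFS client decides what to fetch.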
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3711144.out CHANGED
@@ -15561,3 +15561,45 @@ Last Prediction string लता द्वारा अनुवादित ह
 eval_steps_per_second = 0.781
 eval_wer = 0.5242
 
+wandb: - 0.011 MB of 0.011 MB uploaded
+wandb: Run history:
+wandb: eval/cer ▁
+wandb: eval/loss ▁
+wandb: eval/model_preparation_time ▁
+wandb: eval/runtime ▁
+wandb: eval/samples_per_second ▁
+wandb: eval/steps_per_second ▁
+wandb: eval/wer ▁
+wandb: eval_cer ▁
+wandb: eval_loss ▁
+wandb: eval_model_preparation_time ▁
+wandb: eval_runtime ▁
+wandb: eval_samples ▁
+wandb: eval_samples_per_second ▁
+wandb: eval_steps_per_second ▁
+wandb: eval_wer ▁
+wandb: train/global_step ▁▁
+wandb: 
+wandb: Run summary:
+wandb: eval/cer 0.46299
+wandb: eval/loss 2.21876
+wandb: eval/model_preparation_time 0.0052
+wandb: eval/runtime 46.1107
+wandb: eval/samples_per_second 12.405
+wandb: eval/steps_per_second 0.781
+wandb: eval/wer 0.52421
+wandb: eval_cer 0.46299
+wandb: eval_loss 2.21876
+wandb: eval_model_preparation_time 0.0052
+wandb: eval_runtime 46.1107
+wandb: eval_samples 572
+wandb: eval_samples_per_second 12.405
+wandb: eval_steps_per_second 0.781
+wandb: eval_wer 0.52421
+wandb: train/global_step 0
+wandb: 
+wandb: 🚀 View run transliterated_wer_glamorous_tree_37 at: https://wandb.ai/priyanshipal/huggingface/runs/6hswnhgq
+wandb: ⭐️ View project at: https://wandb.ai/priyanshipal/huggingface
+wandb: Synced 6 W&B file(s), 0 media file(s), 0 artifact file(s) and 0 other file(s)
+wandb: Find logs at: ./wandb/run-20241015_002010-6hswnhgq/logs
+wandb: WARNING The new W&B backend becomes opt-out in version 0.18.0; try it out with `wandb.require("core")`! See https://wandb.me/wandb-core for more information.
transliteratedevalonlyhindi_indicwav2vec_MUCS_warmup500_s300shuff100_3711777.out ADDED
(The diff for this file is too large to render; see the raw diff.)