Ransaka committed
Commit 3091c14 · 1 Parent(s): 7f1fde6

End of training
README.md CHANGED
@@ -14,13 +14,8 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [Ransaka/sinhala-ocr-model](https://huggingface.co/Ransaka/sinhala-ocr-model) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 5.0265
-- eval_cer: 0.4829
-- eval_runtime: 219.5849
-- eval_samples_per_second: 1.858
-- eval_steps_per_second: 0.465
-- epoch: 3.92
-- step: 300
+- Loss: 6.2306
+- Cer: 0.5161
 
 ## Model description
 
@@ -39,17 +34,35 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 3e-05
+- learning_rate: 8e-05
 - train_batch_size: 4
 - eval_batch_size: 4
 - seed: 42
-- gradient_accumulation_steps: 4
-- total_train_batch_size: 16
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 8
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- training_steps: 5000
+- training_steps: 6000
 - mixed_precision_training: Native AMP
 
+### Training results
+
+| Training Loss | Epoch | Step | Validation Loss | Cer    |
+|:-------------:|:-----:|:----:|:---------------:|:------:|
+| 4.543         | 3.27  | 500  | 6.2682          | 0.7086 |
+| 2.6146        | 6.54  | 1000 | 5.8348          | 0.6390 |
+| 1.8448        | 9.8   | 1500 | 5.8076          | 0.6166 |
+| 1.3887        | 13.07 | 2000 | 6.0250          | 0.6072 |
+| 1.0271        | 16.34 | 2500 | 5.9971          | 0.5707 |
+| 0.8891        | 19.61 | 3000 | 5.9803          | 0.5630 |
+| 0.6548        | 22.88 | 3500 | 6.0045          | 0.5542 |
+| 0.4939        | 26.14 | 4000 | 6.0223          | 0.5354 |
+| 0.322         | 29.41 | 4500 | 6.1360          | 0.5233 |
+| 0.2459        | 32.68 | 5000 | 6.1166          | 0.5220 |
+| 0.123         | 35.95 | 5500 | 6.1740          | 0.5162 |
+| 0.1575        | 39.22 | 6000 | 6.2306          | 0.5161 |
+
+
 ### Framework versions
 
 - Transformers 4.35.2
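The Cer values in the README are character error rates for the OCR model. As a point of reference only (this is not the evaluation code from this repo), CER is conventionally computed as the character-level Levenshtein distance between the predicted text and the reference, divided by the reference length; a minimal sketch:

```python
# Illustrative sketch of character error rate (CER), the metric reported
# above as "Cer". Not the actual evaluation script used for this model.

def levenshtein(ref: str, hyp: str) -> int:
    """Character-level edit distance via dynamic programming."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            curr.append(min(prev[j] + 1,              # deletion
                            curr[j - 1] + 1,          # insertion
                            prev[j - 1] + (r != h)))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Edit distance normalized by reference length."""
    return levenshtein(reference, hypothesis) / max(len(reference), 1)

# One substitution in an 11-character reference:
print(round(cer("sinhala ocr", "sinhala ozr"), 3))  # -> 0.091
```

On this reading, the final Cer of 0.5161 means roughly one character-level error for every two reference characters, so the note above about proofreading the model card before use applies to the model's output as well.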
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2133d5acdea916a9fb46a0fe510577da1dc83d74e313568753a36847fe20ef96
+oid sha256:05c49eeec5efb13e113fe9cfaf69d9afed4e9ea06a308e5296ae5f63e0f006f0
 size 1260933520
runs/Jan04_18-53-53_659516cc215a/events.out.tfevents.1704394438.659516cc215a.26.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:16ae246a8211f6d86b05b59f804dfeecfbdc346a70ce429629b315e0487078bc
-size 49969
+oid sha256:5881bfa172994f38dd639d4ce515df4aa55d0130d272d7e2ced6b88a56c8bdf0
+size 50323
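The `model.safetensors` and `events.out.tfevents` entries above are Git LFS pointer files, not the binary contents: each is just a three-line text record (`version`, `oid sha256:<hex>`, `size <bytes>`), and this commit only swaps the oid (the weights' size is unchanged at 1260933520 bytes, about 1.26 GB). A minimal, hypothetical parser for that format:

```python
# Minimal sketch: parsing the Git LFS pointer format shown in the diffs
# above. Illustrative only; real workflows should let `git lfs` resolve
# pointers to blobs.

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of an LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The new pointer for model.safetensors from this commit:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:05c49eeec5efb13e113fe9cfaf69d9afed4e9ea06a308e5296ae5f63e0f006f0
size 1260933520"""

info = parse_lfs_pointer(pointer)
print(info["oid"])        # sha256:05c4... (SHA-256 of the real blob)
print(int(info["size"]))  # -> 1260933520
```

Since the oid is the SHA-256 of the actual file, it can also be used to verify a downloaded checkpoint against the pointer.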