Samuael committed
Commit aa30863
1 Parent(s): ffb365a

End of training

README.md CHANGED
@@ -1,4 +1,5 @@
 ---
+base_model: Samuael/geez_t5-15k
 tags:
 - generated_from_trainer
 model-index:
@@ -11,17 +12,17 @@ should probably proofread and complete it, then remove this comment. -->
 
 # geez_t5-15k
 
-This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset.
+This model is a fine-tuned version of [Samuael/geez_t5-15k](https://huggingface.co/Samuael/geez_t5-15k) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- eval_loss: 0.3052
-- eval_wer: 0.0782
-- eval_cer: 0.0198
-- eval_bleu: 84.3344
-- eval_runtime: 61.9531
-- eval_samples_per_second: 13.074
-- eval_steps_per_second: 0.113
-- epoch: 10.0
-- step: 860
+- eval_loss: 0.8705
+- eval_wer: 0.2496
+- eval_cer: 0.0611
+- eval_bleu: 58.7703
+- eval_runtime: 36.6498
+- eval_samples_per_second: 9.795
+- eval_steps_per_second: 0.164
+- epoch: 1.0
+- step: 1421
 
 ## Model description
 
@@ -40,13 +41,14 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 0.0005
-- train_batch_size: 128
-- eval_batch_size: 128
+- learning_rate: 0.0001
+- train_batch_size: 64
+- eval_batch_size: 64
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 50
+- mixed_precision_training: Native AMP
 
 ### Framework versions
 
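For reference, a minimal sketch of how the updated hyperparameters above might be expressed with transformers' `Seq2SeqTrainingArguments`. The `output_dir`, the `fp16` flag (as a stand-in for "Native AMP"), and `predict_with_generate` are assumptions; the dataset, tokenizer, and data collator are not part of this commit and are omitted.

```python
# Sketch only: maps the README's hyperparameter list onto Seq2SeqTrainingArguments.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="geez_t5-15k",          # assumed output directory
    learning_rate=1e-4,                # learning_rate: 0.0001
    per_device_train_batch_size=64,    # train_batch_size: 64
    per_device_eval_batch_size=64,     # eval_batch_size: 64
    num_train_epochs=50,               # num_epochs: 50
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    seed=42,                           # seed: 42
    fp16=True,                         # assumed counterpart of "Native AMP"
    predict_with_generate=True,        # assumed, needed for WER/CER/BLEU eval
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the defaults
# (adam_beta1, adam_beta2, adam_epsilon), so they are not set explicitly.
```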
config.json CHANGED
@@ -1,4 +1,5 @@
 {
+  "_name_or_path": "Samuael/geez_t5-15k",
   "architectures": [
     "T5ForConditionalGeneration"
   ],
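Since config.json declares the `T5ForConditionalGeneration` architecture, a minimal loading sketch for the checkpoint could look like the following. The use of `AutoTokenizer` and the example input are assumptions; the tokenizer files are not touched by this commit.

```python
# Sketch only: load the checkpoint named in config.json and run generation.
from transformers import AutoTokenizer, T5ForConditionalGeneration

model_id = "Samuael/geez_t5-15k"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = T5ForConditionalGeneration.from_pretrained(model_id)

inputs = tokenizer("selam alem", return_tensors="pt")   # hypothetical input text
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```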
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:09d90722355d6690d38bef8f0d267533113fcf6f2d6553b7362b2745ab2e7ebc
+oid sha256:e5153657479492bd9f4ad78a0ea5d299ea508dfd0a058f147f06ebc9d16f2ae3
 size 240738368
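The weights file is stored as a Git LFS pointer, and the `oid` above is the SHA-256 of the actual safetensors blob. A small sketch for verifying a downloaded copy against the new pointer, assuming a local `model.safetensors` path:

```python
# Sketch only: compare a local model.safetensors against the LFS pointer's oid.
import hashlib

expected = "e5153657479492bd9f4ad78a0ea5d299ea508dfd0a058f147f06ebc9d16f2ae3"

sha256 = hashlib.sha256()
with open("model.safetensors", "rb") as f:            # assumed local path
    for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
        sha256.update(chunk)

assert sha256.hexdigest() == expected, "checksum mismatch"
print("model.safetensors matches the LFS pointer oid")
```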
runs/Mar21_22-40-47_dd8c1793ed55/events.out.tfevents.1711060910.dd8c1793ed55.713.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31b10472d9f79cd68bcb67e05d783ccd45a92cb2f82aa6cd1002609eff3ba3b4
+size 439580
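The added file is a TensorBoard event log written during this training run. One way to inspect it locally is tensorboard's `EventAccumulator`; the `"train/loss"` tag below is an assumption (the usual name used by the HF Trainer), so list the available tags first.

```python
# Sketch only: read scalars from the event file added in this commit.
from tensorboard.backend.event_processing.event_accumulator import EventAccumulator

acc = EventAccumulator("runs/Mar21_22-40-47_dd8c1793ed55")
acc.Reload()

print(acc.Tags()["scalars"])            # list the logged scalar tags
for event in acc.Scalars("train/loss"): # assumed tag name
    print(event.step, event.value)
```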
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0c6c9b06888bcf19f53068e90a0325bfee725a3fb1e522e97b92b2c5a1f3aa21
+oid sha256:84ac4868df4d415a0d6810d88a336ef271d77dcb1a7febf57ebab955904f39a2
 size 5048
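training_args.bin is the serialized training-arguments object that the HF Trainer writes with `torch.save`. A sketch for inspecting it, assuming a local copy; `weights_only=False` is needed on recent PyTorch because the file holds a full Python object rather than tensors.

```python
# Sketch only: inspect the saved training arguments.
import torch

args = torch.load("training_args.bin", weights_only=False)  # assumed local path
print(type(args).__name__)  # e.g. Seq2SeqTrainingArguments (assumed)
print(args.learning_rate, args.per_device_train_batch_size, args.num_train_epochs)
```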