---
library_name: transformers
tags:
- generated_from_trainer
model-index:
- name: ethipic-sec2sec-tigrinya
  results: []
---

# ethipic-sec2sec-tigrinya

This model is a fine-tuned version of an unspecified base model on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.1009
- eval_wer: 0.0416
- eval_cer: 0.0113
- eval_bleu: 91.6015
- eval_runtime: 30.3787 s
- eval_samples_per_second: 9.842
- eval_steps_per_second: 0.099
- epoch: 4.0
- step: 51000
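
The reported `eval_wer`, `eval_cer`, and `eval_bleu` could plausibly be recomputed with the `evaluate` library. The sketch below is an assumption about the metric setup, not the actual evaluation code: the prediction and reference strings are placeholders, and sacreBLEU is assumed because the BLEU value is on a 0-100 scale.

```python
# Hedged sketch: recomputing WER, CER, and BLEU with the `evaluate` library.
# The prediction/reference strings are placeholders, not the real eval data.
import evaluate

wer = evaluate.load("wer")
cer = evaluate.load("cer")
bleu = evaluate.load("sacrebleu")  # assumed: sacreBLEU's 0-100 scale matches eval_bleu

predictions = ["ሰላም ዓለም"]  # decoded model outputs (placeholder)
references = ["ሰላም ዓለም"]   # ground-truth targets (placeholder)

print("WER:", wer.compute(predictions=predictions, references=references))
print("CER:", cer.compute(predictions=predictions, references=references))
# sacreBLEU expects one or more references per prediction
result = bleu.compute(predictions=predictions, references=[[r] for r in references])
print("BLEU:", result["score"])
```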

## Model description

More information needed

## Intended uses & limitations

More information needed
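
In the absence of documented usage, here is a minimal inference sketch, assuming the checkpoint is a standard encoder-decoder model loadable with `AutoModelForSeq2SeqLM`; the model id below is hypothetical and should be replaced with the actual repository path.

```python
# Minimal inference sketch. Assumptions: the model is a standard seq2seq
# checkpoint, and the model id below is hypothetical -- substitute the real path.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_id = "ethipic-sec2sec-tigrinya"  # hypothetical local path or hub id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

text = "ሰላም"  # placeholder Tigrinya input
inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```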

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
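
For reference, a sketch of how these hyperparameters would map onto `Seq2SeqTrainingArguments`; the `output_dir` is a placeholder, single-device training is assumed (so per-device batch sizes equal the totals above), and the Adam betas and epsilon listed are the Trainer defaults.

```python
# Sketch mapping the listed hyperparameters onto Seq2SeqTrainingArguments.
# output_dir is a placeholder; Adam betas/epsilon above are Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ethipic-sec2sec-tigrinya",  # placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=64,   # assumes a single training device
    per_device_eval_batch_size=128,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```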

### Framework versions

- Transformers 4.44.2
- Pytorch 2.5.0+cu121
- Datasets 3.0.2
- Tokenizers 0.19.1