---
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: morbius
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# morbius

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3311
- Bleu: 0.0490
- Precisions: [0.12658339197748064, 0.058000714881448825, 0.031020853918560506, 0.0276665140764477]
- Brevity Penalty: 0.9781
- Length Ratio: 0.9783
- Translation Length: 45472
- Reference Length: 46479
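The reported components fit together under the standard corpus-BLEU formula: BLEU is the brevity penalty times the geometric mean of the four n-gram precisions. A minimal sketch reproducing the headline numbers from the values above (the formula is standard; only the variable names here are our own):

```python
import math

# Values copied from the evaluation results above.
precisions = [0.12658339197748064, 0.058000714881448825,
              0.031020853918560506, 0.0276665140764477]
translation_length = 45472
reference_length = 46479

# Brevity penalty: 1 if the candidate is at least as long as the reference,
# otherwise exp(1 - ref_len / cand_len).
if translation_length >= reference_length:
    brevity_penalty = 1.0
else:
    brevity_penalty = math.exp(1 - reference_length / translation_length)

# BLEU = BP * geometric mean of the n-gram precisions (n = 1..4).
geo_mean = math.exp(sum(math.log(p) for p in precisions) / len(precisions))
bleu = brevity_penalty * geo_mean

print(round(brevity_penalty, 4))  # 0.9781, as reported
print(round(bleu, 4))             # 0.049, as reported (Bleu: 0.0490)
```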

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
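For reference, the hyperparameters above map onto Transformers training arguments roughly as follows. This is a sketch, not the training script actually used: `output_dir` and the choice of `Seq2SeqTrainingArguments` are assumptions, and the Adam betas/epsilon shown are the library defaults, so setting them explicitly is optional.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="morbius",            # assumed; not stated in the card
    learning_rate=5e-5,
    per_device_train_batch_size=12,
    per_device_eval_batch_size=12,
    seed=42,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```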

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu   | Precisions                                                                             | Brevity Penalty | Length Ratio | Translation Length | Reference Length |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------------------------------------------------------------------------------------:|:---------------:|:------------:|:------------------:|:----------------:|
| 2.6085        | 1.0   | 2630  | 2.3793          | 0.0398 | [0.11484440108136675, 0.05086452177719413, 0.022402389588222743, 0.019262093750807972] | 1.0             | 1.0585       | 49197              | 46479            |
| 2.5537        | 2.0   | 5260  | 2.3538          | 0.0451 | [0.12435074854873206, 0.053338059789672695, 0.02736549165120594, 0.024163621427155037] | 0.9858          | 0.9859       | 45822              | 46479            |
| 2.427         | 3.0   | 7890  | 2.3412          | 0.0478 | [0.12566410537870473, 0.05610922151130985, 0.029971974257836827, 0.026891236083357122] | 0.9798          | 0.9800       | 45550              | 46479            |
| 2.3716        | 4.0   | 10520 | 2.3347          | 0.0487 | [0.12663965838169275, 0.0574505431946487, 0.030477866031926728, 0.027230821761893922]  | 0.9823          | 0.9825       | 45665              | 46479            |
| 2.3494        | 5.0   | 13150 | 2.3311          | 0.0490 | [0.12658339197748064, 0.058000714881448825, 0.031020853918560506, 0.0276665140764477]  | 0.9781          | 0.9783       | 45472              | 46479            |
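As a sanity check on the table, the step counter advances by 2,630 per epoch, which at a train batch size of 12 implies on the order of 31,560 training examples (assuming a single device and no gradient accumulation, neither of which the card states):

```python
total_steps = 13150          # final step count after 5 epochs
num_epochs = 5
batch_size = 12              # train_batch_size from the hyperparameters

steps_per_epoch = total_steps // num_epochs   # 2630, matching the per-epoch deltas
examples_per_epoch = steps_per_epoch * batch_size
print(examples_per_epoch)    # 31560 (approximate: the last batch may be partial)
```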


### Framework versions

- Transformers 4.34.0
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.14.0