---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- opus_infopankki
metrics:
- bleu
model-index:
- name: mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: opus_infopankki
      type: opus_infopankki
      args: en-fa
    metrics:
    - name: Bleu
      type: bleu
      value: 9.5106
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en

This model is a fine-tuned version of [persiannlp/mt5-small-parsinlu-opus-translation_fa_en](https://huggingface.co/persiannlp/mt5-small-parsinlu-opus-translation_fa_en) on the opus_infopankki dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5449
- Bleu: 9.5106
- Gen Len: 13.6434
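
The snippet below is a minimal usage sketch for Persian-to-English translation with this checkpoint. The repository id is assumed from the model name above and the uploader's namespace; adjust it to wherever the fine-tuned weights are actually hosted.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repository id (model name above under the uploader's namespace); adjust if needed.
model_id = "PontifexMaximus/mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

def translate(text: str) -> str:
    """Translate a Persian sentence into English."""
    inputs = tokenizer(text, return_tensors="pt")
    output_ids = model.generate(**inputs, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Input sentence means "Hello, how are you?" in Persian.
print(translate("سلام، حال شما چطور است؟"))
```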

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
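
The data preparation used for fine-tuning is not documented here. As a minimal sketch, the opus_infopankki corpus named in the metadata can be loaded with the `datasets` library; the `en-fa` configuration name is taken from the model-index entry above, and the split handling is an assumption.

```python
from datasets import load_dataset

# Assumed configuration name ("en-fa") taken from the model-index metadata above.
raw_datasets = load_dataset("opus_infopankki", "en-fa")
print(raw_datasets)

# Each record is a translation pair, e.g. {"translation": {"en": "...", "fa": "..."}};
# the presence of a single "train" split is an assumption.
example = raw_datasets["train"][0]
print(example["translation"])
```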

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
- mixed_precision_training: Native AMP
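
For reference, the hyperparameters above roughly correspond to the following `Seq2SeqTrainingArguments`. This is a sketch rather than the exact training script; the output directory and the per-epoch evaluation strategy are assumptions inferred from the results table below.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of training arguments matching the list above; the Adam betas/epsilon match the
# optimizer defaults, and fp16=True corresponds to "Native AMP" mixed precision.
training_args = Seq2SeqTrainingArguments(
    output_dir="mt5-small-parsinlu-opus-translation_fa_en-finetuned-fa-to-en",  # assumed
    learning_rate=2e-06,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    fp16=True,
    evaluation_strategy="epoch",  # assumed from the per-epoch validation rows below
    predict_with_generate=True,   # needed to compute BLEU and generation length
)
```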

### Training results

| Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
| No log        | 1.0   | 151  | 3.1656          | 7.194  | 14.1885 |
| No log        | 2.0   | 302  | 3.0419          | 7.7031 | 14.1005 |
| No log        | 3.0   | 453  | 2.9549          | 8.1502 | 13.9834 |
| 3.5336        | 4.0   | 604  | 2.8857          | 8.4488 | 13.9251 |
| 3.5336        | 5.0   | 755  | 2.8297          | 8.6606 | 13.786  |
| 3.5336        | 6.0   | 906  | 2.7808          | 8.8217 | 13.7983 |
| 3.2511        | 7.0   | 1057 | 2.7386          | 8.9221 | 13.7518 |
| 3.2511        | 8.0   | 1208 | 2.7006          | 9.1988 | 13.7159 |
| 3.2511        | 9.0   | 1359 | 2.6678          | 9.2751 | 13.676  |
| 3.1055        | 10.0  | 1510 | 2.6387          | 9.4142 | 13.6648 |
| 3.1055        | 11.0  | 1661 | 2.6154          | 9.5726 | 13.6841 |
| 3.1055        | 12.0  | 1812 | 2.5945          | 9.6571 | 13.6546 |
| 3.1055        | 13.0  | 1963 | 2.5813          | 9.8303 | 13.6571 |
| 3.0199        | 14.0  | 2114 | 2.5709          | 9.6726 | 13.5855 |
| 3.0199        | 15.0  | 2265 | 2.5619          | 9.632  | 13.6125 |
| 3.0199        | 16.0  | 2416 | 2.5563          | 9.5773 | 13.6256 |
| 2.9862        | 17.0  | 2567 | 2.5538          | 9.5425 | 13.6366 |
| 2.9862        | 18.0  | 2718 | 2.5515          | 9.5359 | 13.6326 |
| 2.9862        | 19.0  | 2869 | 2.5495          | 9.5544 | 13.642  |
| 2.9859        | 20.0  | 3020 | 2.5478          | 9.5183 | 13.6374 |
| 2.9859        | 21.0  | 3171 | 2.5466          | 9.5387 | 13.632  |
| 2.9859        | 22.0  | 3322 | 2.5458          | 9.5183 | 13.6355 |
| 2.9859        | 23.0  | 3473 | 2.5451          | 9.5019 | 13.6376 |
| 2.9731        | 24.0  | 3624 | 2.5449          | 9.5004 | 13.6405 |
| 2.9731        | 25.0  | 3775 | 2.5449          | 9.5106 | 13.6434 |
| 2.9731        | 26.0  | 3926 | 2.5449          | 9.5106 | 13.6434 |
| 2.9671        | 27.0  | 4077 | 2.5449          | 9.5106 | 13.6434 |
| 2.9671        | 28.0  | 4228 | 2.5449          | 9.5106 | 13.6434 |
| 2.9671        | 29.0  | 4379 | 2.5449          | 9.5106 | 13.6434 |
| 2.97          | 30.0  | 4530 | 2.5449          | 9.5106 | 13.6434 |


### Framework versions

- Transformers 4.19.2
- Pytorch 1.7.1+cu110
- Datasets 2.2.2
- Tokenizers 0.12.1