domenicrosati committed
Commit cd2401c
1 Parent(s): 2deb9be

update model card README.md

Files changed (1)
  1. README.md +75 -0
README.md ADDED
@@ -0,0 +1,75 @@
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- scielo
metrics:
- bleu
model-index:
- name: opus-mt-es-en-scielo
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: scielo
      type: scielo
      args: en-es
    metrics:
    - name: Bleu
      type: bleu
      value: 40.87878888820179
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# opus-mt-es-en-scielo

This model is a fine-tuned version of [Helsinki-NLP/opus-mt-es-en](https://huggingface.co/Helsinki-NLP/opus-mt-es-en) on the scielo dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2593
- Bleu: 40.8788
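As a quick way to try the model, below is a minimal inference sketch using the Hugging Face Transformers `pipeline` API; the hub id `domenicrosati/opus-mt-es-en-scielo` is an assumption inferred from this commit and may need adjusting.

```python
# Minimal inference sketch for the fine-tuned es->en model
# (the hub id below is assumed from this repository; adjust if it differs).
from transformers import pipeline

translator = pipeline(
    "translation",
    model="domenicrosati/opus-mt-es-en-scielo",  # assumed hub id
)

print(translator("La resonancia magnética mostró una lesión en el lóbulo frontal.")[0]["translation_text"])
```
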
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
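Until this section is filled in: the card metadata points at the scielo dataset with the en-es configuration, which can be loaded as sketched below; how it was split for training and evaluation is not documented here.

```python
# Hedged sketch of loading the parallel data referenced in the card metadata
# (scielo, "en-es" config); the train/eval split used for fine-tuning is unknown.
from datasets import load_dataset

scielo = load_dataset("scielo", "en-es")
pair = scielo["train"][0]["translation"]
print(pair["es"], "->", pair["en"])
```
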
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
- mixed_precision_training: Native AMP
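As a rough reconstruction, these settings map onto a Hugging Face `Seq2SeqTrainingArguments` roughly as sketched below; the output directory and the evaluation/generation options are assumptions, and the Adam betas and epsilon listed above are the library defaults.

```python
# Rough sketch of the training configuration described above
# (output_dir, evaluation_strategy, and predict_with_generate are assumptions).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-es-en-scielo",  # assumed
    learning_rate=5.6e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=2,
    lr_scheduler_type="linear",
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="epoch",        # assumed; matches the per-epoch results below
    predict_with_generate=True,         # assumed; needed to report BLEU during eval
)
```
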
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Bleu    |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 1.4277        | 1.0   | 10001 | 1.3473          | 40.5849 |
| 1.2007        | 2.0   | 20002 | 1.3146          | 41.3308 |


### Framework versions

- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1