PontifexMaximus committed
Commit
4245f0a
1 Parent(s): 38446bc

update model card README.md

Files changed (1)
  1. README.md +35 -9
README.md CHANGED
@@ -3,10 +3,23 @@ license: apache-2.0
  tags:
  - generated_from_trainer
  datasets:
- - turkic_xwmt
+ - opus_infopankki
+ metrics:
+ - bleu
  model-index:
  - name: opus-mt-tr-en-finetuned-tr-to-en
-   results: []
+   results:
+   - task:
+       name: Sequence-to-sequence Language Modeling
+       type: text2text-generation
+     dataset:
+       name: opus_infopankki
+       type: opus_infopankki
+       args: en-tr
+     metrics:
+     - name: Bleu
+       type: bleu
+       value: 37.5209
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -14,7 +27,11 @@ should probably proofread and complete it, then remove this comment. -->

  # opus-mt-tr-en-finetuned-tr-to-en

- This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) on the turkic_xwmt dataset.
+ This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tr-en](https://huggingface.co/Helsinki-NLP/opus-mt-tr-en) on the opus_infopankki dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.3456
+ - Bleu: 37.5209
+ - Gen Len: 13.5457

  ## Model description

@@ -33,18 +50,27 @@ More information needed
  ### Training hyperparameters

  The following hyperparameters were used during training:
- - learning_rate: 0.2
- - train_batch_size: 16
- - eval_batch_size: 16
+ - learning_rate: 2e-06
+ - train_batch_size: 32
+ - eval_batch_size: 32
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
- - num_epochs: 1
+ - num_epochs: 3
  - mixed_precision_training: Native AMP

+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Bleu    | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
+ | No log        | 1.0   | 138  | 1.4283          | 35.8087 | 13.5806 |
+ | No log        | 2.0   | 276  | 1.3649          | 36.8833 | 13.5446 |
+ | No log        | 3.0   | 414  | 1.3456          | 37.5209 | 13.5457 |
+
+
  ### Framework versions

  - Transformers 4.19.2
- - Pytorch 1.11.0+cu113
- - Datasets 2.2.1
+ - Pytorch 1.7.1+cu110
+ - Datasets 2.2.2
  - Tokenizers 0.12.1
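For context, the hyperparameters listed in the updated hunk above map onto `Seq2SeqTrainingArguments` roughly as in the sketch below. Only the values shown in the card come from this commit; the output directory, evaluation cadence, and model/tokenizer setup are illustrative assumptions.

```python
# Sketch of training arguments matching the hyperparameters in the updated card.
# Anything not listed in the card (output_dir, evaluation_strategy, etc.) is an
# assumption for illustration only.
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    Seq2SeqTrainingArguments,
)

base_checkpoint = "Helsinki-NLP/opus-mt-tr-en"
tokenizer = AutoTokenizer.from_pretrained(base_checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(base_checkpoint)

args = Seq2SeqTrainingArguments(
    output_dir="opus-mt-tr-en-finetuned-tr-to-en",  # assumed
    learning_rate=2e-6,                  # from the card
    per_device_train_batch_size=32,      # from the card
    per_device_eval_batch_size=32,       # from the card
    seed=42,                             # from the card
    num_train_epochs=3,                  # from the card
    lr_scheduler_type="linear",          # from the card (also the Trainer default)
    fp16=True,                           # "Native AMP" mixed precision
    evaluation_strategy="epoch",         # assumed; matches the per-epoch results table
    predict_with_generate=True,          # needed to report Bleu / Gen Len at eval time
)

# The default Trainer optimizer (AdamW with betas=(0.9, 0.999), eps=1e-08) matches
# the optimizer line in the card, so no explicit optimizer setup is shown here.
# trainer = Seq2SeqTrainer(model=model, args=args, train_dataset=..., eval_dataset=...,
#                          tokenizer=tokenizer, compute_metrics=...)
```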
 
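The Bleu and Gen Len columns in the new training-results table are the kind of figures a sacreBLEU-based `compute_metrics` hook reports during evaluation. The commit does not include the metric code, so the following is only a plausible sketch of that common translation fine-tuning recipe.

```python
# Plausible compute_metrics sketch producing "bleu" and "gen_len" values like those
# in the results table; the actual code used for this run is not in the commit.
import numpy as np
from datasets import load_metric  # available in the Datasets 2.2.x pinned by the card
from transformers import AutoTokenizer

# Assumed setup: the tokenizer of the base checkpoint named in the card.
tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-tr-en")
metric = load_metric("sacrebleu")

def compute_metrics(eval_preds):
    preds, labels = eval_preds
    decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True)
    # The data collator pads labels with -100; restore pad ids before decoding.
    labels = np.where(labels != -100, labels, tokenizer.pad_token_id)
    decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True)

    result = metric.compute(
        predictions=[pred.strip() for pred in decoded_preds],
        references=[[label.strip()] for label in decoded_labels],
    )
    prediction_lens = [np.count_nonzero(pred != tokenizer.pad_token_id) for pred in preds]
    return {"bleu": result["score"], "gen_len": float(np.mean(prediction_lens))}
```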
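Finally, a minimal inference sketch for the fine-tuned checkpoint the card describes. The hub ID `PontifexMaximus/opus-mt-tr-en-finetuned-tr-to-en` is an assumption built from the committer's namespace and the model name; substitute the real repository ID or a local checkpoint path.

```python
# Minimal usage sketch; the repository ID below is assumed, not confirmed by the commit.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="PontifexMaximus/opus-mt-tr-en-finetuned-tr-to-en",
)

# Turkish -> English example sentence.
print(translator("Bu model Türkçeden İngilizceye çeviri yapar.")[0]["translation_text"])
```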