rossanez committed
Commit 47dbe88
1 Parent(s): ca17083

update model card README.md

Files changed (1):
  1. README.md +25 -4

README.md CHANGED
@@ -4,9 +4,22 @@ tags:
 - generated_from_trainer
 datasets:
 - wmt14
+metrics:
+- bleu
 model-index:
 - name: t5-small-finetuned-de-en-nofp16
-  results: []
+  results:
+  - task:
+      name: Sequence-to-sequence Language Modeling
+      type: text2text-generation
+    dataset:
+      name: wmt14
+      type: wmt14
+      args: de-en
+    metrics:
+    - name: Bleu
+      type: bleu
+      value: 9.5801
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +28,10 @@ should probably proofread and complete it, then remove this comment. -->
 # t5-small-finetuned-de-en-nofp16
 
 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 2.1460
+- Bleu: 9.5801
+- Gen Len: 17.333
 
 ## Model description
 
@@ -33,19 +50,23 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 0.0002
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 5
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Bleu   | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log        | 1.0   | 188  | 2.1221          | 7.7311 | 17.4159 |
+| No log        | 1.0   | 188  | 2.1899          | 9.4821 | 17.312  |
+| No log        | 2.0   | 376  | 2.1986          | 9.5705 | 17.3853 |
+| 1.2118        | 3.0   | 564  | 2.1933          | 9.448  | 17.3293 |
+| 1.2118        | 4.0   | 752  | 2.1607          | 9.563  | 17.336  |
+| 1.2118        | 5.0   | 940  | 2.1460          | 9.5801 | 17.333  |
 
 
 ### Framework versions
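
The numbers in the updated card are internally consistent, which is worth checking when editing hyperparameters by hand. A minimal sketch (the ~3,008-example training-set size is inferred from the table, not stated in the card, and zero warmup steps is assumed for the linear schedule): 188 optimizer steps per epoch at train_batch_size 16 implies roughly 188 × 16 = 3,008 training pairs, 5 epochs give the 940 total steps in the last table row, and a warmup-free linear scheduler decays the learning rate from 0.0002 toward 0 as lr(step) = 0.0002 · (1 − step/940):

```python
# Sanity-check the step counts and the linear LR schedule implied by the
# hyperparameters in the card. The training-set size is an inference from
# the table, not a value stated in the card.

train_batch_size = 16
steps_per_epoch = 188   # "Step" column advances by 188 per epoch
num_epochs = 5
initial_lr = 2e-4       # learning_rate: 0.0002

total_steps = steps_per_epoch * num_epochs            # 940, matches the epoch-5.0 row
approx_train_examples = steps_per_epoch * train_batch_size  # ~3008 pairs

def linear_lr(step, total=total_steps, lr0=initial_lr):
    """Linear decay to zero, assuming no warmup (one common 'linear' schedule)."""
    return lr0 * max(0.0, 1.0 - step / total)

print(total_steps)            # 940
print(approx_train_examples)  # 3008
print(linear_lr(470))         # halfway through training: 0.0001
```

If the epoch count or batch size in the card is edited, the Step column and the final scheduler state change with it, so a quick check like this catches stale numbers.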