rossanez committed on
Commit 334a9d8
1 Parent(s): 764947d

update model card README.md

Files changed (1)
  1. README.md +25 -4
README.md CHANGED
@@ -4,9 +4,22 @@ tags:
 - generated_from_trainer
 datasets:
 - wmt14
+metrics:
+- bleu
 model-index:
 - name: t5-small-finetuned-de-en-wd-01
-  results: []
+  results:
+  - task:
+      name: Sequence-to-sequence Language Modeling
+      type: text2text-generation
+    dataset:
+      name: wmt14
+      type: wmt14
+      args: de-en
+    metrics:
+    - name: Bleu
+      type: bleu
+      value: 9.6027
 ---

 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -15,6 +28,10 @@ should probably proofread and complete it, then remove this comment. -->
 # t5-small-finetuned-de-en-wd-01

 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt14 dataset.
+It achieves the following results on the evaluation set:
+- Loss: 2.0482
+- Bleu: 9.6027
+- Gen Len: 17.3776

 ## Model description

@@ -33,20 +50,24 @@ More information needed
 ### Training hyperparameters

 The following hyperparameters were used during training:
-- learning_rate: 2e-05
+- learning_rate: 0.0002
 - train_batch_size: 16
 - eval_batch_size: 16
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 5
 - mixed_precision_training: Native AMP

 ### Training results

 | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|
-| No log | 1.0 | 188 | 2.1197 | 7.6864 | 17.4416 |
+| No log | 1.0 | 188 | 2.0502 | 9.3675 | 17.3983 |
+| No log | 2.0 | 376 | 2.0590 | 9.4393 | 17.3869 |
+| 1.6509 | 3.0 | 564 | 2.0639 | 9.3886 | 17.3806 |
+| 1.6509 | 4.0 | 752 | 2.0498 | 9.5802 | 17.3846 |
+| 1.6509 | 5.0 | 940 | 2.0482 | 9.6027 | 17.3776 |


 ### Framework versions
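
The `generated_from_trainer` tag indicates these values come from the Hugging Face `Trainer`. As a rough, non-authoritative sketch, the hyperparameters listed in the updated card would map onto `Seq2SeqTrainingArguments` roughly as below; `output_dir` and `evaluation_strategy="epoch"` are assumptions (suggested by the per-epoch rows in the results table), not values stated in the card.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: maps the card's listed hyperparameters onto the Trainer API.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-de-en-wd-01",  # assumed output directory
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=16,    # train_batch_size: 16
    per_device_eval_batch_size=16,     # eval_batch_size: 16
    seed=42,                           # seed: 42
    lr_scheduler_type="linear",        # lr_scheduler_type: linear
    num_train_epochs=5,                # num_epochs: 5
    fp16=True,                         # mixed_precision_training: Native AMP
    evaluation_strategy="epoch",       # assumption, consistent with per-epoch results
)
```

The Adam betas (0.9, 0.999) and epsilon (1e-08) in the card match the `Trainer` defaults, so they need no explicit arguments.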
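Since the card describes a German-to-English model fine-tuned from t5-small on WMT14, a minimal inference sketch may help readers of the card. The repository id below is assumed from the author and model name, and the T5-style task prefix is an assumption about how the training inputs were preprocessed.

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Assumed repository id (author + card name); replace with the actual model path.
model_id = "rossanez/t5-small-finetuned-de-en-wd-01"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# T5 checkpoints are usually prompted with a task prefix; whether this fine-tune
# expects one depends on how the WMT14 de-en pairs were preprocessed for training.
text = "translate German to English: Das Haus ist wunderschön."  # "The house is wonderful."

inputs = tokenizer(text, return_tensors="pt")
output_ids = model.generate(**inputs, max_length=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```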