furyhawk committed on
Commit 9dfd6be
Parent: 2b4559c

update model card README.md

Files changed (1)
  1. README.md +16 -6
README.md CHANGED
@@ -2,6 +2,8 @@
  license: apache-2.0
  tags:
  - generated_from_trainer
+ metrics:
+ - rouge
  model-index:
  - name: t5-small-finetuned-bbc
    results: []
@@ -13,6 +15,13 @@ should probably proofread and complete it, then remove this comment. -->
  # t5-small-finetuned-bbc

  This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.3238
+ - Rouge1: 21.2266
+ - Rouge2: 16.0927
+ - Rougel: 19.6785
+ - Rougelsum: 19.8849
+ - Gen Len: 19.0

  ## Model description

@@ -32,23 +41,24 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 12
- - eval_batch_size: 12
+ - train_batch_size: 2
+ - eval_batch_size: 2
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
  - num_epochs: 1
+ - mixed_precision_training: Native AMP

  ### Training results

  | Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
  |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
- | No log        | 1.0   | 167  | 0.6722          | 19.4359 | 13.8329 | 17.9079 | 18.091    | 19.0    |
+ | 0.4882        | 1.0   | 1001 | 0.3238          | 21.2266 | 16.0927 | 19.6785 | 19.8849   | 19.0    |

  ### Framework versions

- - Transformers 4.11.3
- - Pytorch 1.9.1
- - Datasets 1.12.1
+ - Transformers 4.12.0
+ - Pytorch 1.10.0
+ - Datasets 1.14.0
  - Tokenizers 0.10.3
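
For reference, the updated hyperparameters map onto a training-arguments object roughly as follows. This is a minimal sketch assuming the standard `transformers` `Seq2SeqTrainingArguments` API from the 4.12 release, not the author's actual script: the `output_dir` is a placeholder, and the Adam betas/epsilon and linear scheduler listed in the card are the `Trainer` defaults rather than explicit settings here.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the configuration described in the updated card.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-small-finetuned-bbc",  # placeholder path, not from the commit
    learning_rate=2e-5,
    per_device_train_batch_size=2,        # train_batch_size: 2
    per_device_eval_batch_size=2,         # eval_batch_size: 2
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,                            # mixed_precision_training: Native AMP
)
```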
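The ROUGE figures added to the results table can be checked against held-out references along these lines. This is a hedged sketch rather than the evaluation code behind the card: the Hub id `furyhawk/t5-small-finetuned-bbc` is inferred from the card name and committer, and the article and reference strings are placeholders.

```python
from transformers import pipeline
from datasets import load_metric

# Hypothetical Hub id inferred from the card name; adjust to the actual repo.
summarizer = pipeline("summarization", model="furyhawk/t5-small-finetuned-bbc")

article = "Your BBC article text goes here ..."    # placeholder input
reference = "Reference summary goes here ..."      # placeholder reference

prediction = summarizer(article)[0]["summary_text"]

# ROUGE as reported in the card (Rouge1/Rouge2/RougeL/RougeLsum), scaled to 0-100.
rouge = load_metric("rouge")
scores = rouge.compute(predictions=[prediction], references=[reference])
print({name: round(score.mid.fmeasure * 100, 4) for name, score in scores.items()})
```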