update model card README.md
README.md
CHANGED
@@ -2,8 +2,6 @@
 license: apache-2.0
 tags:
 - generated_from_trainer
-metrics:
-- rouge
 model-index:
 - name: t5-small-finetuned-pubmed
   results: []
@@ -16,12 +14,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
 It achieves the following results on the evaluation set:
--
--
--
--
--
--
+- eval_loss: 1.8403
+- eval_rouge2_precision: 0.298
+- eval_rouge2_recall: 0.1943
+- eval_rouge2_fmeasure: 0.2198
+- eval_runtime: 4.1041
+- eval_samples_per_second: 43.372
+- eval_steps_per_second: 2.924
+- epoch: 5.0
+- step: 500
 
 ## Model description
 
@@ -46,20 +47,9 @@ The following hyperparameters were used during training:
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs:
+- num_epochs: 15
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
-|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
-| No log        | 1.0   | 100  | 2.1324          | 29.4167 | 13.5345 | 25.6588 | 25.8099   | 17.8596 |
-| No log        | 2.0   | 200  | 2.0319          | 34.0176 | 16.285  | 29.3676 | 29.5428   | 17.1966 |
-| No log        | 3.0   | 300  | 1.9969          | 35.0555 | 17.1712 | 30.7931 | 30.9756   | 16.8989 |
-| No log        | 4.0   | 400  | 1.9802          | 35.997  | 17.979  | 31.8043 | 32.1127   | 16.8539 |
-| 2.1897        | 5.0   | 500  | 1.9754          | 36.7213 | 18.6627 | 32.3932 | 32.6819   | 16.9326 |
-
-
 ### Framework versions
 
 - Transformers 4.12.2