Commit ee2b1ab
Parent(s): 488b3c5

Upload TFT5ForConditionalGeneration

Files changed:
- README.md (+11 -43)
- generation_config.json (+1 -0)
- tf_model.h5 (+3 -0)
README.md (CHANGED)

Old version (removed or changed lines are marked `-`):

```diff
@@ -1,31 +1,19 @@
 ---
-license: apache-2.0
-base_model: t5-small
 tags:
-
-metrics:
-- rouge
 model-index:
 - name: t5-small-MedicoSummarizer
   results: []
-language:
-- en
 ---
 
-<!-- This model card has been generated automatically according to the information
-the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->
 
 # t5-small-MedicoSummarizer
 
-This model is a fine-tuned version of t5-small on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.8533
-- Rouge1: 0.3234
-- Rouge2: 0.0787
-- Rougel: 0.1967
-- Rougelsum: 0.1965
-- Gen Len: 123.98
 
 ## Model description
```

```diff
@@ -41,39 +29,19 @@
 
 ## Training procedure
 
-Note: the hosted inference widget does not do this model justice, because its default context limit for T5 is low (you can raise it when serving the model from your own application's backend). It is better to load the checkpoint in a pipeline and try it there.
-
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-
-
-- eval_batch_size: 16
-- seed: 42
-- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
-- lr_scheduler_type: linear
-- num_epochs: 10
-- mixed_precision_training: Native AMP
 
 ### Training results
 
-| Training Loss | Epoch | Step  | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
-|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
-| 3.2353        | 1.0   | 1563  | 2.9967          | 0.3034 | 0.0717 | 0.1837 | 0.1836    | 117.308 |
-| 3.1623        | 2.0   | 3126  | 2.9421          | 0.3178 | 0.0763 | 0.1941 | 0.1941    | 121.529 |
-| 3.1149        | 3.0   | 4689  | 2.9152          | 0.3223 | 0.078  | 0.1964 | 0.1964    | 123.223 |
-| 3.1038        | 4.0   | 6252  | 2.8929          | 0.3245 | 0.0793 | 0.1979 | 0.1978    | 123.491 |
-| 3.0728        | 5.0   | 7815  | 2.8802          | 0.3227 | 0.0777 | 0.1973 | 0.1972    | 123.6   |
-| 3.0592        | 6.0   | 9378  | 2.8714          | 0.3213 | 0.0788 | 0.1966 | 0.1965    | 123.604 |
-| 3.0448        | 7.0   | 10941 | 2.8635          | 0.3211 | 0.0776 | 0.1959 | 0.1957    | 123.632 |
-| 3.0416        | 8.0   | 12504 | 2.8561          | 0.3204 | 0.0777 | 0.1957 | 0.1955    | 123.851 |
-| 3.0324        | 9.0   | 14067 | 2.8548          | 0.3237 | 0.0788 | 0.1965 | 0.1963    | 123.934 |
-| 3.0375        | 10.0  | 15630 | 2.8533          | 0.3234 | 0.0787 | 0.1967 | 0.1965    | 123.98  |
 
 ### Framework versions
 
 - Transformers 4.35.2
-
-- Datasets 2.
-- Tokenizers 0.15.0
```
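The old card's usage note recommends loading the checkpoint in a `transformers` pipeline rather than relying on the hosted inference widget, whose default T5 context limit is low. A minimal sketch under stated assumptions: the `chunk_text` helper and its word-count limit are our own illustration of staying under that limit, and the repo id's owner namespace is not shown in this commit, so the pipeline call is left as a commented placeholder.

```python
# Split long clinical text into word-based chunks so each piece stays
# comfortably under T5's default 512-token context limit (using word
# count as a rough proxy for token count).
def chunk_text(text: str, max_words: int = 350) -> list[str]:
    words = text.split()
    return [" ".join(words[i:i + max_words])
            for i in range(0, len(words), max_words)]

# Hypothetical usage (requires the `transformers` library, network access,
# and the full repo id including its owner namespace, elided here):
# from transformers import pipeline
# summarizer = pipeline("summarization", model="<owner>/t5-small-MedicoSummarizer")
# summaries = [out[0]["summary_text"] for out in map(summarizer, chunk_text(report))]
```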

New version (added lines are marked `+`):

```diff
 ---
 tags:
+- generated_from_keras_callback
 model-index:
 - name: t5-small-MedicoSummarizer
   results: []
 ---
 
+<!-- This model card has been generated automatically according to the information Keras had access to. You should
+probably proofread and complete it, then remove this comment. -->
 
 # t5-small-MedicoSummarizer
 
+This model was trained from scratch on an unknown dataset.
+It achieves the following results on the evaluation set:
 
 ## Model description
 
 ## Training procedure
 
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
+- optimizer: None
+- training_precision: float32
 
 ### Training results
 
 ### Framework versions
 
 - Transformers 4.35.2
+- TensorFlow 2.15.0
+- Datasets 2.16.1
+- Tokenizers 0.15.0
```
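The old card lists `lr_scheduler_type: linear` over `num_epochs: 10`, and its results table shows 1563 steps per epoch, i.e. 15630 steps total. That schedule can be sketched as a pure function; the base learning rate is not shown in the card, so `base_lr` is a free parameter here, and zero warmup steps are assumed.

```python
TOTAL_STEPS = 15630  # 10 epochs x 1563 steps/epoch, from the training-results table

def linear_lr(step: int, base_lr: float, total_steps: int = TOTAL_STEPS) -> float:
    """Linearly decay the learning rate from base_lr down to 0 over
    total_steps, assuming no warmup phase."""
    remaining = max(0, total_steps - step)
    return base_lr * remaining / total_steps
```

At the halfway point (step 7815 in the table) this yields exactly half the base learning rate, and it reaches 0 at the final step.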
generation_config.json (CHANGED)

```diff
@@ -1,4 +1,5 @@
 {
+  "_from_model_config": true,
   "decoder_start_token_id": 0,
   "eos_token_id": 1,
   "pad_token_id": 0,
```
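The added `_from_model_config` flag marks these generation settings as auto-derived from the model's own config rather than hand-tuned. A small sketch of reading the fields visible in the hunk (the file is truncated in the diff, so any further keys are omitted here):

```python
import json

# The updated generation_config.json fields as shown in the diff above.
raw = """
{
  "_from_model_config": true,
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "pad_token_id": 0
}
"""

config = json.loads(raw)

# T5 starts decoding from the pad token, so the two ids coincide here.
assert config["decoder_start_token_id"] == config["pad_token_id"] == 0
```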
tf_model.h5 (ADDED, Git LFS pointer)

```diff
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7a96117a913213a3de1eb94c8a1f613f60daa1904924329d9824e8949a8a722
+size 373902664
```
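Because tf_model.h5 is tracked with Git LFS, the repository stores only this three-line pointer file, while the actual weights (about 374 MB) live in LFS storage. A sketch of parsing such a pointer, using the pointer text from the diff:

```python
# Git LFS pointer text, copied verbatim from the added tf_model.h5 file.
POINTER = """\
version https://git-lfs.github.com/spec/v1
oid sha256:b7a96117a913213a3de1eb94c8a1f613f60daa1904924329d9824e8949a8a722
size 373902664
"""

def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = dict(line.split(" ", 1) for line in text.strip().splitlines())
    algo, digest = fields["oid"].split(":", 1)
    return {
        "version": fields["version"],
        "oid_algo": algo,        # hash algorithm, e.g. "sha256"
        "oid": digest,           # content hash of the real file
        "size_bytes": int(fields["size"]),
    }

info = parse_lfs_pointer(POINTER)
```

After a normal `git clone` without LFS, this pointer is what you would find on disk; `git lfs pull` (or the Hugging Face Hub client) replaces it with the real 373,902,664-byte weights file.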