---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- big_patent
metrics:
- rouge
model-index:
- name: patent-summarization-t5-base-2022-09-20
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: big_patent
      type: big_patent
      config: all
      split: train
      args: all
    metrics:
    - name: Rouge1
      type: rouge
      value: 19.4044
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# patent-summarization-t5-base-2022-09-20

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the big_patent dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9973
- Rouge1: 19.4044
- Rouge2: 7.5483
- Rougel: 16.2429
- Rougelsum: 17.488
- Gen Len: 19.0

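The ROUGE values above are F-measures scaled by 100. As a rough illustration of what ROUGE-1 measures (unigram overlap between a generated summary and a reference), here is a simplified hand-rolled sketch; it is not the tokenization- and stemming-aware implementation the evaluation loop uses:

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Simplified ROUGE-1 F1: clipped whitespace-tokenized unigram overlap."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # unigram matches, clipped per token
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, a candidate "the cat sat" against a reference "the cat sat down" has precision 1.0 and recall 0.75, giving an F1 of about 0.857.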
## Model description

More information needed

## Intended uses & limitations

More information needed

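Absent fuller documentation, inference follows the standard seq2seq summarization pattern. A minimal sketch, assuming the checkpoint is published on the Hub under the repo id below (unverified here; the ~20-token generation cap mirrors the Gen Len of 19.0 reported above):

```python
MODEL_ID = "farleyknight/patent-summarization-t5-base-2022-09-20"  # assumed Hub repo id

def summarize(text: str, max_length: int = 20) -> str:
    """Summarize a patent description with the fine-tuned T5 checkpoint."""
    from transformers import pipeline  # local import: heavy, network-backed dependency
    summarizer = pipeline("summarization", model=MODEL_ID)
    return summarizer(text, max_length=max_length)[0]["summary_text"]
```
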
## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1.0

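With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate decays linearly from 5e-05 toward 0 over the run. A minimal sketch of that schedule (the ~60k total optimizer steps is an assumption taken from the logged step counts):

```python
def linear_lr(step: int, total_steps: int,
              base_lr: float = 5e-05, warmup_steps: int = 0) -> float:
    """Linearly warm up (if any), then decay base_lr to 0 by total_steps."""
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    remaining = max(0.0, total_steps - step)
    return base_lr * remaining / max(1, total_steps - warmup_steps)
```
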
### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2 | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| 2.2811        | 0.08  | 5000  | 2.1767          | 18.5624 | 6.8795 | 15.5361 | 16.6836   | 19.0    |
| 2.2551        | 0.17  | 10000 | 2.1327          | 19.077  | 6.8512 | 15.79   | 17.086    | 19.0    |
| 2.2818        | 0.25  | 15000 | 2.1029          | 18.8637 | 6.9233 | 15.7341 | 16.9717   | 19.0    |
| 2.1952        | 0.33  | 20000 | 2.0805          | 18.962  | 7.1157 | 15.8297 | 17.0333   | 19.0    |
| 2.157         | 0.41  | 25000 | 2.0641          | 19.1418 | 7.315  | 16.05   | 17.2551   | 19.0    |
| 2.1775        | 0.5   | 30000 | 2.0452          | 19.2387 | 7.3193 | 16.0852 | 17.3563   | 19.0    |
| 2.1376        | 0.58  | 35000 | 2.0308          | 19.291  | 7.363  | 16.1243 | 17.4151   | 19.0    |
| 2.1853        | 0.66  | 40000 | 2.0207          | 19.2808 | 7.4671 | 16.1593 | 17.3836   | 19.0    |
| 2.1416        | 0.75  | 45000 | 2.0113          | 19.0414 | 7.3335 | 15.9747 | 17.1899   | 19.0    |
| 2.1245        | 0.83  | 50000 | 2.0055          | 19.1445 | 7.3715 | 16.0166 | 17.2621   | 19.0    |
| 2.133         | 0.91  | 55000 | 1.9997          | 19.3033 | 7.4821 | 16.1413 | 17.3949   | 19.0    |
| 2.1191        | 0.99  | 60000 | 1.9973          | 19.4044 | 7.5483 | 16.2429 | 17.488    | 19.0    |

### Framework versions

- Transformers 4.23.0.dev0
- Pytorch 1.12.0
- Datasets 2.4.0
- Tokenizers 0.12.1