GlycerinLOL committed
Commit b7e6ae0
Parent: 2fbc1cd

Model save

README.md CHANGED
@@ -19,15 +19,15 @@ should probably proofread and complete it, then remove this comment. -->
 
 This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.4469
-- Rouge1: 0.4939
-- Rouge2: 0.2453
-- Rougel: 0.4133
-- Rougelsum: 0.4134
-- Gen Len: 25.9629
-- Precision: 0.9133
-- Recall: 0.9138
-- F1: 0.9134
+- Loss: 1.4433
+- Rouge1: 0.4961
+- Rouge2: 0.2476
+- Rougel: 0.4155
+- Rougelsum: 0.4154
+- Gen Len: 25.8629
+- Precision: 0.9136
+- Recall: 0.914
+- F1: 0.9137
 
 ## Model description
 
@@ -54,7 +54,7 @@ The following hyperparameters were used during training:
 - total_train_batch_size: 96
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 16
+- num_epochs: 20
 - mixed_precision_training: Native AMP
 
 ### Training results
@@ -66,16 +66,20 @@ The following hyperparameters were used during training:
 | 1.626 | 4.0 | 2776 | 0.911 | 26.7596 | 1.5171 | 0.9102 | 0.9121 | 0.4834 | 0.2345 | 0.402 | 0.402 |
 | 1.5918 | 5.0 | 3471 | 0.9112 | 26.6476 | 1.5001 | 0.9106 | 0.9122 | 0.4853 | 0.2365 | 0.4045 | 0.4045 |
 | 1.5586 | 6.0 | 4164 | 0.9116 | 26.7778 | 1.4880 | 0.9108 | 0.9127 | 0.4875 | 0.2373 | 0.4063 | 0.4063 |
-| 1.5375 | 7.0 | 4858 | 1.4768 | 0.4898 | 0.24 | 0.4083 | 0.4083 | 26.3991 | 0.9116 | 0.9128 | 0.912 |
-| 1.5146 | 8.0 | 5553 | 1.4686 | 0.4907 | 0.241 | 0.4088 | 0.4089 | 26.156 | 0.9123 | 0.9133 | 0.9126 |
-| 1.5006 | 9.0 | 6247 | 1.4636 | 0.4914 | 0.2419 | 0.4097 | 0.4099 | 26.2629 | 0.9122 | 0.9135 | 0.9127 |
-| 1.49 | 10.0 | 6942 | 1.4580 | 0.4911 | 0.2429 | 0.4109 | 0.411 | 26.0273 | 0.9125 | 0.9133 | 0.9127 |
-| 1.4749 | 11.0 | 7636 | 1.4546 | 0.4932 | 0.244 | 0.4121 | 0.4123 | 26.2304 | 0.9127 | 0.9138 | 0.9131 |
-| 1.4661 | 12.0 | 8331 | 1.4514 | 0.4937 | 0.2448 | 0.4126 | 0.4127 | 25.8778 | 0.9133 | 0.9136 | 0.9132 |
-| 1.4575 | 13.0 | 9025 | 1.4499 | 0.4947 | 0.2453 | 0.4139 | 0.414 | 26.1151 | 0.913 | 0.914 | 0.9133 |
-| 1.4511 | 14.0 | 9720 | 1.4478 | 0.4939 | 0.2451 | 0.4133 | 0.4134 | 26.0287 | 0.9131 | 0.9138 | 0.9133 |
-| 1.4519 | 15.0 | 10414 | 1.4471 | 0.4938 | 0.2451 | 0.4134 | 0.4134 | 25.9078 | 0.9132 | 0.9137 | 0.9133 |
-| 1.4439 | 15.99 | 11104 | 1.4469 | 0.4939 | 0.2453 | 0.4133 | 0.4134 | 25.9629 | 0.9133 | 0.9138 | 0.9134 |
+| 1.5375 | 7.0 | 4858 | 0.912 | 26.3991 | 1.4768 | 0.9116 | 0.9128 | 0.4898 | 0.24 | 0.4083 | 0.4083 |
+| 1.5146 | 8.0 | 5553 | 0.9126 | 26.156 | 1.4686 | 0.9123 | 0.9133 | 0.4907 | 0.241 | 0.4088 | 0.4089 |
+| 1.5006 | 9.0 | 6247 | 0.9127 | 26.2629 | 1.4636 | 0.9122 | 0.9135 | 0.4914 | 0.2419 | 0.4097 | 0.4099 |
+| 1.49 | 10.0 | 6942 | 0.9127 | 26.0273 | 1.4580 | 0.9125 | 0.9133 | 0.4911 | 0.2429 | 0.4109 | 0.411 |
+| 1.4749 | 11.0 | 7636 | 0.9131 | 26.2304 | 1.4546 | 0.9127 | 0.9138 | 0.4932 | 0.244 | 0.4121 | 0.4123 |
+| 1.4661 | 12.0 | 8331 | 0.9132 | 25.8778 | 1.4514 | 0.9133 | 0.9136 | 0.4937 | 0.2448 | 0.4126 | 0.4127 |
+| 1.4575 | 13.0 | 9025 | 0.9133 | 26.1151 | 1.4499 | 0.913 | 0.914 | 0.4947 | 0.2453 | 0.4139 | 0.414 |
+| 1.4511 | 14.0 | 9720 | 0.9133 | 26.0287 | 1.4478 | 0.9131 | 0.9138 | 0.4939 | 0.2451 | 0.4133 | 0.4134 |
+| 1.4519 | 15.0 | 10414 | 0.9133 | 25.9078 | 1.4471 | 0.9132 | 0.9137 | 0.4938 | 0.2451 | 0.4134 | 0.4134 |
+| 1.4439 | 16.0 | 11104 | 0.9133 | 26.0345 | 1.4474 | 0.9131 | 0.9139 | 0.4942 | 0.2456 | 0.4133 | 0.4134 |
+| 1.4441 | 17.0 | 11799 | 0.9134 | 25.9391 | 1.4447 | 0.9133 | 0.9138 | 0.4945 | 0.2457 | 0.4139 | 0.414 |
+| 1.444 | 18.0 | 12493 | 0.9135 | 26.0107 | 1.4446 | 0.9133 | 0.9141 | 0.4957 | 0.2473 | 0.415 | 0.4151 |
+| 1.4375 | 19.0 | 13188 | 0.9136 | 25.8869 | 1.4433 | 0.9136 | 0.914 | 0.4961 | 0.2473 | 0.4153 | 0.4153 |
+| 1.4361 | 20.0 | 13880 | 0.9137 | 25.8629 | 1.4433 | 0.9136 | 0.914 | 0.4961 | 0.2476 | 0.4155 | 0.4154 |
 
 
 ### Framework versions
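The Rouge1 and Rouge2 values in the card are n-gram-overlap F-measures between generated and reference summaries. A minimal sketch of unigram ROUGE-1, assuming plain whitespace tokenization (the card's numbers were presumably produced by a stemmed implementation such as `rouge_score`, so exact values will differ):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram ROUGE-1 F-measure: clipped token overlap between the two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # each token counted at most min(cand, ref) times
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

# 5 of 6 unigrams match in each direction, so precision = recall = F1 = 5/6
print(round(rouge1_f("the cat sat on the mat", "the cat lay on the mat"), 4))  # 0.8333
```

ROUGE-2 is the same computation over bigrams, and ROUGE-L/ROUGE-Lsum score the longest common subsequence instead of fixed n-grams.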
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:a9adde0c55d4fecdde1c8d9709bc0dfa75253617e6e18c477d15410cdaf0be0e
+oid sha256:5eb0e2acf11f740f40f4930231b80f75ac922c12df6cfa94a981431125242447
 size 2283652852
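The binary files in this commit are stored as Git LFS pointer files: three `key value` lines (spec version, `sha256` OID, byte size) stand in for the actual blob, which is why only the OID changes while the weight file's size stays constant. A small sketch of reading such a pointer (`parse_lfs_pointer` is a hypothetical helper, not part of any library):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Split a Git LFS pointer file into its version/oid/size fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # each line is "key value"
        fields[key] = value
    fields["size"] = int(fields["size"])  # byte size of the real blob
    return fields

pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5eb0e2acf11f740f40f4930231b80f75ac922c12df6cfa94a981431125242447
size 2283652852
"""
info = parse_lfs_pointer(pointer)
print(info["size"])  # 2283652852
```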
runs/Mar15_23-22-32_d9n20yctr1710463501031-gcv7j/events.out.tfevents.1710516166.d9n20yctr1710463501031-gcv7j.15088.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:2c899ef56aaf809ea819e1853a05738c6ae4e57900ea22ae2ff7b59215fd5813
-size 10463
+oid sha256:a07e28a3d18fe552e698ddf92518695797182ee81dee7fc8a53b2962fef87f34
+size 11491