eschorn committed on
Commit 3791402
1 Parent(s): fd4675e

update model card README.md

Files changed (1):
  README.md +31 -33
README.md CHANGED

@@ -1,6 +1,6 @@
  ---
  license: apache-2.0
- base_model: google/flan-t5-small
+ base_model: google/flan-t5-large
  tags:
  - generated_from_trainer
  datasets:
@@ -10,7 +10,6 @@ metrics:
  model-index:
  - name: 3_loa
    results: []
- library_name: peft
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -18,13 +17,13 @@ should probably proofread and complete it, then remove this comment. -->

  # 3_loa

- This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) on the billsum dataset.
+ This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the billsum dataset.
  It achieves the following results on the evaluation set:
- - Loss: 2.1919
- - Rouge1: 0.1973
- - Rouge2: 0.1007
- - Rougel: 0.1708
- - Rougelsum: 0.1711
+ - Loss: 1.5159
+ - Rouge1: 0.2005
+ - Rouge2: 0.1122
+ - Rougel: 0.1739
+ - Rougelsum: 0.1738
  - Gen Len: 19.0

  ## Model description
@@ -45,8 +44,8 @@ More information needed

  The following hyperparameters were used during training:
  - learning_rate: 2e-05
- - train_batch_size: 8
- - eval_batch_size: 8
+ - train_batch_size: 1
+ - eval_batch_size: 1
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: linear
@@ -54,33 +53,32 @@ The following hyperparameters were used during training:

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
- |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
- | No log | 1.0 | 124 | 2.2622 | 0.1909 | 0.0921 | 0.1656 | 0.1659 | 19.0 |
- | No log | 2.0 | 248 | 2.2534 | 0.1931 | 0.0956 | 0.1679 | 0.1681 | 19.0 |
- | No log | 3.0 | 372 | 2.2433 | 0.1952 | 0.0967 | 0.1697 | 0.1699 | 19.0 |
- | No log | 4.0 | 496 | 2.2358 | 0.1953 | 0.0978 | 0.1701 | 0.1702 | 19.0 |
- | 2.4755 | 5.0 | 620 | 2.2323 | 0.1951 | 0.0981 | 0.1705 | 0.1706 | 19.0 |
- | 2.4755 | 6.0 | 744 | 2.2253 | 0.1962 | 0.0996 | 0.1712 | 0.1714 | 19.0 |
- | 2.4755 | 7.0 | 868 | 2.2199 | 0.1968 | 0.1003 | 0.1719 | 0.172 | 19.0 |
- | 2.4755 | 8.0 | 992 | 2.2170 | 0.1963 | 0.0999 | 0.1717 | 0.1717 | 19.0 |
- | 2.4416 | 9.0 | 1116 | 2.2134 | 0.1971 | 0.1002 | 0.1723 | 0.1724 | 19.0 |
- | 2.4416 | 10.0 | 1240 | 2.2069 | 0.1967 | 0.0995 | 0.1715 | 0.1716 | 19.0 |
- | 2.4416 | 11.0 | 1364 | 2.2053 | 0.1983 | 0.102 | 0.1729 | 0.1732 | 19.0 |
- | 2.4416 | 12.0 | 1488 | 2.2034 | 0.1976 | 0.1018 | 0.1722 | 0.1725 | 19.0 |
- | 2.4153 | 13.0 | 1612 | 2.1995 | 0.1985 | 0.1019 | 0.1725 | 0.1727 | 19.0 |
- | 2.4153 | 14.0 | 1736 | 2.1980 | 0.198 | 0.1016 | 0.1721 | 0.1722 | 19.0 |
- | 2.4153 | 15.0 | 1860 | 2.1961 | 0.1983 | 0.1017 | 0.172 | 0.1721 | 19.0 |
- | 2.4153 | 16.0 | 1984 | 2.1947 | 0.1977 | 0.1013 | 0.1715 | 0.1717 | 19.0 |
- | 2.4069 | 17.0 | 2108 | 2.1936 | 0.1976 | 0.101 | 0.1714 | 0.1716 | 19.0 |
- | 2.4069 | 18.0 | 2232 | 2.1925 | 0.1977 | 0.1013 | 0.1713 | 0.1715 | 19.0 |
- | 2.4069 | 19.0 | 2356 | 2.1918 | 0.1973 | 0.1007 | 0.1709 | 0.1711 | 19.0 |
- | 2.4069 | 20.0 | 2480 | 2.1919 | 0.1973 | 0.1007 | 0.1708 | 0.1711 | 19.0 |
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
+ | 2.0741 | 1.0 | 989 | 1.7085 | 0.2064 | 0.1145 | 0.1771 | 0.1771 | 19.0 |
+ | 1.8521 | 2.0 | 1978 | 1.6510 | 0.2021 | 0.109 | 0.1744 | 0.1743 | 19.0 |
+ | 1.7753 | 3.0 | 2967 | 1.6182 | 0.2015 | 0.1099 | 0.1742 | 0.1742 | 19.0 |
+ | 1.7481 | 4.0 | 3956 | 1.5940 | 0.1995 | 0.1102 | 0.1736 | 0.1737 | 19.0 |
+ | 1.6966 | 5.0 | 4945 | 1.5771 | 0.1999 | 0.1112 | 0.1739 | 0.1738 | 19.0 |
+ | 1.7107 | 6.0 | 5934 | 1.5629 | 0.1974 | 0.1091 | 0.1721 | 0.1721 | 19.0 |
+ | 1.6905 | 7.0 | 6923 | 1.5527 | 0.1993 | 0.1091 | 0.1737 | 0.1737 | 19.0 |
+ | 1.6341 | 8.0 | 7912 | 1.5475 | 0.1994 | 0.11 | 0.1732 | 0.1731 | 19.0 |
+ | 1.6649 | 9.0 | 8901 | 1.5422 | 0.1978 | 0.109 | 0.1726 | 0.1722 | 19.0 |
+ | 1.6338 | 10.0 | 9890 | 1.5339 | 0.2009 | 0.1125 | 0.1748 | 0.1744 | 19.0 |
+ | 1.6545 | 11.0 | 10879 | 1.5310 | 0.201 | 0.1138 | 0.1759 | 0.1757 | 19.0 |
+ | 1.6617 | 12.0 | 11868 | 1.5323 | 0.2026 | 0.1152 | 0.1762 | 0.1761 | 19.0 |
+ | 1.629 | 13.0 | 12857 | 1.5245 | 0.202 | 0.1143 | 0.1752 | 0.1751 | 19.0 |
+ | 1.6202 | 14.0 | 13846 | 1.5214 | 0.2021 | 0.1138 | 0.1752 | 0.1751 | 19.0 |
+ | 1.6127 | 15.0 | 14835 | 1.5206 | 0.2013 | 0.113 | 0.1746 | 0.1743 | 19.0 |
+ | 1.6072 | 16.0 | 15824 | 1.5171 | 0.1991 | 0.1112 | 0.1731 | 0.1727 | 19.0 |
+ | 1.6032 | 17.0 | 16813 | 1.5180 | 0.1997 | 0.1126 | 0.1737 | 0.1735 | 19.0 |
+ | 1.6103 | 18.0 | 17802 | 1.5169 | 0.1999 | 0.1128 | 0.1741 | 0.1738 | 19.0 |
+ | 1.5956 | 19.0 | 18791 | 1.5160 | 0.2008 | 0.1128 | 0.1743 | 0.174 | 19.0 |
+ | 1.5981 | 20.0 | 19780 | 1.5159 | 0.2005 | 0.1122 | 0.1739 | 0.1738 | 19.0 |


  ### Framework versions

- - PEFT 0.4.0
  - Transformers 4.31.0
  - Pytorch 1.13.1.post200
  - Datasets 2.10.0
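
The removed `library_name: peft` tag and `PEFT 0.4.0` entry mean the updated checkpoint is a full fine-tune rather than a PEFT adapter, so it loads directly with `transformers`. A minimal inference sketch under that assumption; the repo id `eschorn/3_loa`, the `summarize:` prefix, and the generation cap are illustrative guesses, not details stated in the card:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Repo id guessed from the committer and model name; adjust as needed.
model_id = "eschorn/3_loa"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# billsum-style input; "summarize: " is the usual T5 task prefix (assumed,
# since the card does not document the prompt format).
text = "summarize: The people of the State of California do enact as follows: ..."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)

# Gen Len 19.0 in the card is consistent with the Seq2SeqTrainer default
# generation cap of 20 tokens, mirrored here.
output_ids = model.generate(**inputs, max_length=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```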
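The hyperparameter list maps one-to-one onto `Seq2SeqTrainingArguments` in Transformers 4.31.0. A sketch reconstructing those settings; `output_dir`, the per-epoch evaluation strategy, and `num_train_epochs=20` (inferred from the results table, which runs to epoch 20.0) are assumptions:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="3_loa",              # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=1,   # train_batch_size: 1
    per_device_eval_batch_size=1,    # eval_batch_size: 1
    seed=42,
    adam_beta1=0.9,                  # optimizer: Adam with betas=(0.9,0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,               # and epsilon=1e-08
    lr_scheduler_type="linear",
    num_train_epochs=20,             # inferred from the results table
    evaluation_strategy="epoch",     # assumed: the table logs one eval per epoch
    predict_with_generate=True,      # required to compute ROUGE and Gen Len
)
```

With `train_batch_size: 1`, the 989 steps per epoch in the new table imply a training split of 989 examples; the old run's 124 steps per epoch at batch size 8 (ceil(989 / 8) = 124) points to the same split, so the two runs differ in model size and batch size, not data.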
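The Rouge1/Rouge2/Rougel/Rougelsum values are fractions in [0, 1], the scale produced by the Hugging Face `evaluate` ROUGE metric. A sketch of how such scores are computed; the use of `evaluate` is an assumption, since the card does not name its metric library:

```python
# requires: pip install evaluate rouge_score
import evaluate

rouge = evaluate.load("rouge")

# Toy stand-ins; for the card these would be the model's generated summaries
# and the billsum reference summaries from the evaluation split.
predictions = ["the bill amends the internal revenue code"]
references = ["this bill amends the internal revenue code of 1986"]

# compute() returns rouge1 / rouge2 / rougeL / rougeLsum as floats on the
# same 0-1 scale as the tables above.
scores = rouge.compute(predictions=predictions, references=references)
print(scores)
```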