Commit 0fedc35 (parent b756524) by zwellington: update model card README.md

Files changed (1): README.md (+100 lines)
---
license: apache-2.0
base_model: facebook/bart-large
tags:
- generated_from_trainer
datasets:
- clupubhealth
metrics:
- rouge
model-index:
- name: bart-pubhealth-expanded-hi-grad
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: clupubhealth
      type: clupubhealth
      config: expanded
      split: test
      args: expanded
    metrics:
    - name: Rouge1
      type: rouge
      value: 30.2592
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bart-pubhealth-expanded-hi-grad

This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on the clupubhealth dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0581
- Rouge1: 30.2592
- Rouge2: 11.7027
- Rougel: 24.1706
- Rougelsum: 24.3596
- Gen Len: 19.95
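The autogenerated card stops at the metrics, so a short inference sketch may be useful. The hub repo id below is an assumption pieced together from the commit author and model name, and `max_length=20` mirrors the roughly 20-token Gen Len reported above:

```python
from transformers import pipeline

# Repo id is assumed from the commit author and model name above;
# adjust it to the checkpoint's actual location.
summarizer = pipeline(
    "summarization",
    model="zwellington/bart-pubhealth-expanded-hi-grad",
)

text = "Long public-health claim or article text to be condensed goes here."
# Gen Len above is ~20 tokens, so cap generation accordingly.
summary = summarizer(text, max_length=20, truncation=True)[0]["summary_text"]
print(summary)
```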
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 950
- total_train_batch_size: 15200
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
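The `total_train_batch_size` entry follows from the two lines above it: on a single device, the Trainer's effective batch size is the per-device batch size times the number of gradient-accumulation steps. A quick check of the arithmetic:

```python
# Effective batch size: gradients from 950 micro-batches of 16 examples
# are accumulated before each optimizer step.
train_batch_size = 16
gradient_accumulation_steps = 950

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 15200, matching the hyperparameter list above
```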
### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 3.7893        | 0.49  | 2    | 2.3943          | 20.5187 | 5.4764  | 15.9378 | 16.2797   | 20.0    |
| 3.4045        | 0.98  | 4    | 2.1599          | 24.0858 | 7.8207  | 19.0412 | 19.1609   | 19.88   |
| 3.2488        | 1.47  | 6    | 2.1026          | 27.3466 | 9.369   | 21.1419 | 21.3136   | 19.865  |
| 3.1823        | 1.96  | 8    | 2.1324          | 28.825  | 9.6007  | 22.0963 | 22.3776   | 19.82   |
| 3.1263        | 2.44  | 10   | 2.1105          | 29.2694 | 10.5001 | 23.2842 | 23.5473   | 19.85   |
| 3.0834        | 2.93  | 12   | 2.0837          | 28.5975 | 10.2016 | 22.048  | 22.1341   | 19.915  |
| 3.0283        | 3.42  | 14   | 2.0773          | 28.5813 | 10.447  | 22.7456 | 22.8496   | 19.91   |
| 3.0301        | 3.91  | 16   | 2.0730          | 30.1049 | 11.4375 | 24.083  | 24.3045   | 19.945  |
| 2.9851        | 4.4   | 18   | 2.0775          | 29.2224 | 10.2722 | 22.7019 | 23.0038   | 19.95   |
| 2.9769        | 4.89  | 20   | 2.0777          | 29.6981 | 10.7044 | 23.2487 | 23.5232   | 19.96   |
| 2.9623        | 5.38  | 22   | 2.0711          | 29.0438 | 10.5105 | 23.1751 | 23.415    | 19.92   |
| 2.9421        | 5.87  | 24   | 2.0676          | 29.096  | 10.6599 | 23.1381 | 23.3765   | 19.985  |
| 2.9234        | 6.36  | 26   | 2.0646          | 29.6561 | 10.9096 | 23.2384 | 23.4265   | 19.985  |
| 2.9107        | 6.85  | 28   | 2.0616          | 29.7134 | 11.1686 | 23.272  | 23.4475   | 19.985  |
| 2.9077        | 7.33  | 30   | 2.0593          | 29.5055 | 11.0256 | 23.4406 | 23.6653   | 19.955  |
| 2.9072        | 7.82  | 32   | 2.0585          | 30.0504 | 11.433  | 23.9176 | 24.1728   | 19.95   |
| 2.8951        | 8.31  | 34   | 2.0583          | 29.9401 | 11.602  | 23.948  | 24.1323   | 19.95   |
| 2.8955        | 8.8   | 36   | 2.0584          | 30.1158 | 11.4745 | 24.0509 | 24.2465   | 19.94   |
| 2.8774        | 9.29  | 38   | 2.0582          | 30.0476 | 11.4465 | 23.8956 | 24.0527   | 19.945  |
| 2.8851        | 9.78  | 40   | 2.0581          | 30.2592 | 11.7027 | 24.1706 | 24.3596   | 19.95   |
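The Rouge1/Rouge2/Rougel columns in the table are F1 overlap scores scaled by 100. As a rough illustration of what ROUGE-1 measures, here is a minimal unigram-overlap F1, a simplified stand-in for the official implementation (which also applies stemming):

```python
from collections import Counter

def rouge1_f1(prediction: str, reference: str) -> float:
    """Unigram-overlap F1, a simplified stand-in for ROUGE-1 (no stemming)."""
    pred_counts = Counter(prediction.lower().split())
    ref_counts = Counter(reference.lower().split())
    # Clipped overlap: each unigram counts at most as often as it
    # appears in both the prediction and the reference.
    overlap = sum(min(pred_counts[w], ref_counts[w]) for w in pred_counts)
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

# 4 of 5 predicted unigrams match all 4 reference unigrams:
# precision 0.8, recall 1.0, F1 = 8/9 ≈ 88.89 on the card's 0-100 scale.
print(round(100 * rouge1_f1("the claim is mostly false", "the claim is false"), 2))
```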
### Framework versions

- Transformers 4.31.0
- Pytorch 2.0.1+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2