theojolliffe committed on
Commit
9db53ae
1 Parent(s): 933be9e

update model card README.md

Files changed (1)
  1. README.md +77 -0
README.md ADDED
@@ -0,0 +1,77 @@
---
license: mit
tags:
- generated_from_trainer
datasets:
- scientific_papers
metrics:
- rouge
model-index:
- name: bart-large-cnn-pubmed1o3-pubmed2o3
  results:
  - task:
      name: Sequence-to-sequence Language Modeling
      type: text2text-generation
    dataset:
      name: scientific_papers
      type: scientific_papers
      args: pubmed
    metrics:
    - name: Rouge1
      type: rouge
      value: 37.4586
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bart-large-cnn-pubmed1o3-pubmed2o3

This model is a fine-tuned version of [theojolliffe/bart-large-cnn-pubmed1o3](https://huggingface.co/theojolliffe/bart-large-cnn-pubmed1o3) on the scientific_papers dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8817
- Rouge1: 37.4586
- Rouge2: 15.5572
- Rougel: 23.0686
- Rougelsum: 34.1522
- Gen Len: 138.379
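
The checkpoint can be tried out with the standard `transformers` summarization pipeline. This is a minimal sketch: the generation settings shown are illustrative placeholders loosely informed by the average generation length reported above, not the configuration used to produce these scores.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint from the Hub for abstractive summarization.
summarizer = pipeline(
    "summarization",
    model="theojolliffe/bart-large-cnn-pubmed1o3-pubmed2o3",
)

# Placeholder input: a long scientific article, e.g. a PubMed paper body.
article = "..."

# max_length/min_length are illustrative choices, loosely based on the
# ~138-token average generation length reported above; truncation guards
# against inputs longer than the model's maximum source length.
result = summarizer(article, max_length=142, min_length=56, truncation=True)
print(result[0]["summary_text"])
```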

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an equivalent trainer configuration is sketched after the list):
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
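
These values map onto `Seq2SeqTrainingArguments` roughly as follows. This is a hedged reconstruction, not the original training script: the output directory is a placeholder, and any argument not listed above is left at its library default.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the run configuration from the values above.
training_args = Seq2SeqTrainingArguments(
    output_dir="bart-large-cnn-pubmed1o3-pubmed2o3",  # placeholder path
    learning_rate=2e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=1,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the transformers
    # default optimizer setting, so it needs no explicit arguments here.
)
```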

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.9586        | 1.0   | 19988 | 1.8817          | 37.4586 | 15.5572 | 23.0686 | 34.1522   | 138.379 |
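
The ROUGE values are F-measures on the 0-100 scale. They can be recomputed along the lines below; this is a hedged sketch assuming a recent release of the `evaluate` library (the run itself used Datasets 2.2.2, whose metric API differed slightly), with placeholder predictions and references.

```python
import evaluate

# Load the ROUGE metric; recent `evaluate` releases return aggregated
# F-measures as floats in [0, 1].
rouge = evaluate.load("rouge")

predictions = ["model-generated summary"]  # placeholder model outputs
references = ["reference abstract"]        # placeholder gold summaries

scores = rouge.compute(predictions=predictions, references=references)
# Scale to the 0-100 convention used in the table above.
print({name: round(value * 100, 4) for name, value in scores.items()})
```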

### Framework versions

- Transformers 4.19.2
- PyTorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1