Ameer05 committed
Commit eca7daf
1 Parent(s): b61f34a

update model card README.md

Files changed (1)
1. README.md +73 -0
README.md ADDED
---
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak

This model is a fine-tuned version of [Ameer05/model-token-repo](https://huggingface.co/Ameer05/model-token-repo) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4511
- Rouge1: 59.76
- Rouge2: 52.1999
- Rougel: 57.3631
- Rougelsum: 59.3075
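
The snippet below is a minimal inference sketch. It assumes the checkpoint is published on the Hub under `Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak` (a repository id inferred from the model name above, not confirmed by this card); the sample text is a placeholder.

```python
# Minimal inference sketch; the repository id below is an assumption
# based on the model name in this card.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="Ameer05/bart-large-cnn-samsum-rescom-finetuned-resume-summarizer-9-epoch-tweak",
)

resume_text = (
    "Software engineer with eight years of experience building data pipelines, "
    "leading a five-person team, and shipping machine-learning features to production."
)  # placeholder input

summary = summarizer(resume_text, max_length=60, min_length=10, do_sample=False)
print(summary[0]["summary_text"])
```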

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
- mixed_precision_training: Native AMP
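
As a rough illustration, the list above corresponds approximately to a `Seq2SeqTrainingArguments` configuration like the sketch below. This is a reconstruction from the reported hyperparameters, not the original training script, and the `output_dir` name is hypothetical.

```python
# Approximate reconstruction of the listed hyperparameters (illustrative only).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="resume-summarizer-9-epoch-tweak",  # hypothetical path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 8 * 4 = 32
    lr_scheduler_type="linear",
    num_train_epochs=9,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                       # mixed precision (native AMP)
)
```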

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| No log        | 0.91  | 5    | 2.0185          | 52.2186 | 45.4675 | 49.3152 | 51.9415   |
| No log        | 1.91  | 10   | 1.6571          | 60.7728 | 52.8611 | 57.3487 | 60.1676   |
| No log        | 2.91  | 15   | 1.5323          | 60.5674 | 52.2246 | 57.9846 | 60.073    |
| No log        | 3.91  | 20   | 1.4556          | 61.2167 | 53.5087 | 58.9609 | 60.893    |
| 1.566         | 4.91  | 25   | 1.4632          | 62.918  | 55.4544 | 60.7116 | 62.6614   |
| 1.566         | 5.91  | 30   | 1.4360          | 60.4173 | 52.5859 | 57.8131 | 59.8864   |
| 1.566         | 6.91  | 35   | 1.4361          | 61.4273 | 53.9663 | 59.4445 | 60.9672   |
| 1.566         | 7.91  | 40   | 1.4477          | 60.3401 | 52.7276 | 57.7504 | 59.8209   |
| 0.6928        | 8.91  | 45   | 1.4511          | 59.76   | 52.1999 | 57.3631 | 59.3075   |
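
The ROUGE values above appear to be F-measure scores scaled to 0-100, as is conventional for Trainer-based summarization scripts. A minimal sketch of computing comparable scores with the `datasets` ROUGE metric (which also requires the `rouge_score` package) is shown below; the prediction and reference strings are placeholders, not outputs of this model.

```python
# Illustrative ROUGE computation; not the exact evaluation script behind this card.
from datasets import load_metric  # also needs the rouge_score package installed

rouge = load_metric("rouge")

predictions = ["the candidate led a five-person engineering team"]  # model outputs (placeholder)
references = ["led a team of five engineers"]                       # gold summaries (placeholder)

scores = rouge.compute(predictions=predictions, references=references)
# Report the mid F-measure scaled to 0-100, matching the convention in the table above.
print({name: round(agg.mid.fmeasure * 100, 4) for name, agg in scores.items()})
```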

### Framework versions

- Transformers 4.15.0
- PyTorch 1.9.1
- Datasets 1.18.4
- Tokenizers 0.10.3