GuysTrans committed
Commit 90620b5
1 Parent(s): 1e7abef

update model card README.md

Files changed (1)
  1. README.md +22 -14
README.md CHANGED
@@ -1,6 +1,5 @@
 ---
 license: apache-2.0
-base_model: facebook/bart-base
 tags:
 - generated_from_trainer
 metrics:
@@ -15,13 +14,13 @@ should probably proofread and complete it, then remove this comment. -->
 
 # bart-base-finetuned-xsum
 
-This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset.
+This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 1.8239
-- Rouge1: 10.4203
-- Rouge2: 5.4986
-- Rougel: 8.9779
-- Rougelsum: 9.9702
+- Loss: 1.7802
+- Rouge1: 10.2143
+- Rouge2: 5.6684
+- Rougel: 8.8677
+- Rougelsum: 9.8692
 - Gen Len: 20.0
 
 ## Model description
@@ -42,23 +41,32 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 2e-05
-- train_batch_size: 2
-- eval_batch_size: 2
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
-- num_epochs: 1
+- num_epochs: 10
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:------:|:---------:|:-------:|
-| 1.1654 | 1.0 | 2003 | 1.8239 | 10.4203 | 5.4986 | 8.9779 | 9.9702 | 20.0 |
+| 3.1154 | 1.0 | 501 | 2.1511 | 10.4214 | 5.0073 | 8.8506 | 9.9896 | 19.982 |
+| 2.1503 | 2.0 | 1002 | 1.9367 | 10.2207 | 5.631 | 8.9531 | 9.9404 | 20.0 |
+| 1.9303 | 3.0 | 1503 | 1.8703 | 10.4496 | 5.8424 | 9.1 | 10.1692 | 20.0 |
+| 1.8227 | 4.0 | 2004 | 1.8365 | 10.3195 | 5.6383 | 8.9427 | 10.0217 | 20.0 |
+| 1.7561 | 5.0 | 2505 | 1.8137 | 10.3644 | 5.7409 | 8.9742 | 10.0328 | 20.0 |
+| 1.6962 | 6.0 | 3006 | 1.7963 | 10.307 | 5.7619 | 8.9713 | 10.0001 | 20.0 |
+| 1.6573 | 7.0 | 3507 | 1.7906 | 10.2633 | 5.6772 | 8.9086 | 9.9373 | 20.0 |
+| 1.6357 | 8.0 | 4008 | 1.7808 | 10.3619 | 5.7546 | 9.0124 | 10.02 | 20.0 |
+| 1.6269 | 9.0 | 4509 | 1.7808 | 10.2688 | 5.6934 | 8.934 | 9.9284 | 20.0 |
+| 1.6031 | 10.0 | 5010 | 1.7802 | 10.2143 | 5.6684 | 8.8677 | 9.8692 | 20.0 |
 
 
 ### Framework versions
 
-- Transformers 4.31.0
-- Pytorch 2.0.1+cu118
-- Datasets 2.14.3
+- Transformers 4.30.2
+- Pytorch 2.0.0
+- Datasets 2.1.0
 - Tokenizers 0.13.3
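
The hyperparameter list in the updated card corresponds to a standard `Seq2SeqTrainer` run in `transformers`. Below is a minimal sketch of how those values would map onto `Seq2SeqTrainingArguments`, assuming the usual summarization fine-tuning setup; the dataset, tokenization, and ROUGE wiring are placeholders and are not part of this commit.

```python
# Sketch of the training configuration implied by the updated card
# (Transformers 4.30.x API). Only hyperparameters listed in the card
# are filled in; dataset and metric code is deliberately left out.
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

training_args = Seq2SeqTrainingArguments(
    output_dir="bart-base-finetuned-xsum",
    learning_rate=2e-5,             # learning_rate: 2e-05
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    num_train_epochs=10,            # num_epochs: 10
    seed=42,                        # seed: 42
    lr_scheduler_type="linear",     # lr_scheduler_type: linear
    evaluation_strategy="epoch",    # assumption: per-epoch eval, matching the results table
    predict_with_generate=True,     # needed to compute ROUGE / Gen Len during eval
)

# trainer = Seq2SeqTrainer(
#     model=model,
#     args=training_args,
#     train_dataset=...,   # not specified in the card ("None dataset")
#     eval_dataset=...,
#     data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
#     tokenizer=tokenizer,
#     compute_metrics=...,  # ROUGE, e.g. via the `evaluate` library
# )
# trainer.train()
```

The Adam betas and epsilon listed in the card are the `transformers` defaults, so they need no explicit arguments in this sketch.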
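
For using the resulting checkpoint, the standard summarization pipeline applies. The repository id below is an assumption inferred from the commit author and model name, not something stated in the diff; `max_length=20` mirrors the card's Gen Len of 20.0.

```python
# Minimal inference sketch. The repo id is an assumption
# (commit author + model name); substitute the actual Hub id.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="GuysTrans/bart-base-finetuned-xsum",  # assumed Hub repo id
)

text = "The quick brown fox jumped over the lazy dog. " * 10
print(summarizer(text, max_length=20, min_length=5, do_sample=False))
```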