ubaada committed
Commit 24c4fe3
1 Parent(s): 17508c9

Update README.md

Files changed (1): README.md (+5 -17)
README.md CHANGED
@@ -7,19 +7,15 @@ metrics:
 model-index:
 - name: lsg-bart-large-4096-booksum
   results: []
+datasets:
+- ubaada/booksum-complete-cleaned
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
 
 # lsg-bart-large-4096-booksum
 
-This model is a fine-tuned version of [ubaada/lsg-bart-large-4096-booksum](https://huggingface.co/ubaada/lsg-bart-large-4096-booksum) on an unknown dataset.
-It achieves the following results on the evaluation set:
-- Loss: 2.0742
-- Rouge1: 0.4145
-- Rouge2: 0.0797
-- Rougel: 0.1541
+This model is a fine-tuned version of [ubaada/lsg-bart-large-4096-booksum](https://huggingface.co/ubaada/lsg-bart-large-4096-booksum) on the ubaada/booksum-complete-cleaned dataset.
+Validation loss (on a subset of the validation set): 2.0742
 
 ## Model description
 
@@ -49,18 +45,10 @@ The following hyperparameters were used during training:
 - num_epochs: 3
 - mixed_precision_training: Native AMP
 
-### Training results
-
-| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel |
-|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
-| 1.3801        | 1.0   | 1251 | 2.0441          | 0.4223 | 0.0811 | 0.1532 |
-| 1.2385        | 2.0   | 2502 | 2.0753          | 0.3995 | 0.0751 | 0.1512 |
-| 0.9542        | 3.0   | 3753 | 2.0742          | 0.4145 | 0.0797 | 0.1541 |
-
 
 ### Framework versions
 
 - Transformers 4.40.2
 - Pytorch 2.2.0
 - Datasets 2.19.1
-- Tokenizers 0.19.1
+- Tokenizers 0.19.1
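For context, the Rouge1/Rouge2/Rougel numbers in the removed evaluation block are n-gram-overlap F1 scores between generated and reference summaries. The sketch below is a minimal illustration only (whitespace tokenization, no stemming); the values in this model card were presumably produced with a full implementation such as the `rouge_score` package, not this function.

```python
from collections import Counter

def rouge_n_f1(candidate: str, reference: str, n: int = 1) -> float:
    """Simplified ROUGE-N F1: clipped n-gram overlap between candidate
    and reference, combined as harmonic mean of precision and recall.
    Illustrative stand-in for the Rouge1/Rouge2 scores above."""
    def ngrams(text: str, n: int) -> Counter:
        toks = text.lower().split()
        return Counter(tuple(toks[i:i + n]) for i in range(len(toks) - n + 1))

    cand, ref = ngrams(candidate, n), ngrams(reference, n)
    if not cand or not ref:
        return 0.0
    overlap = sum((cand & ref).values())  # per-n-gram counts clipped to min
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Identical texts score 1.0; disjoint texts score 0.0.
print(rouge_n_f1("the cat sat on the mat", "the cat sat on the mat"))  # → 1.0
print(rouge_n_f1("alpha beta", "gamma delta"))                          # → 0.0
```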