pszemraj committed on
Commit 6933cac
1 Parent(s): 4572805

Model save

Files changed (3)
  1. README.md +71 -0
  2. generation_config.json +15 -0
  3. model.safetensors +1 -1
README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ license: bsd-3-clause
+ base_model: pszemraj/pegasus-x-large-book-summary
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ model-index:
+ - name: pegasus-x-large-book-summary-synthsumm-16384-v2
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # pegasus-x-large-book-summary-synthsumm-16384-v2
+
+ This model is a fine-tuned version of [pszemraj/pegasus-x-large-book-summary](https://huggingface.co/pszemraj/pegasus-x-large-book-summary) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.5481
+ - Rouge1: 48.141
+ - Rouge2: 19.1137
+ - Rougel: 33.647
+ - Rougelsum: 42.1211
+ - Gen Len: 73.9846
+
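+ The ROUGE values appear to be on a 0–100 scale; below is a minimal sketch of computing comparable scores with the `evaluate` library (which returns fractions in [0, 1], scaled by 100 here) on placeholder texts:
+
+ ```python
+ # Sketch of how ROUGE scores like those above are typically computed.
+ # The example texts are placeholders, not data from this model's evaluation set.
+ import evaluate  # also requires the rouge_score package
+
+ rouge = evaluate.load("rouge")
+ predictions = ["the cat sat on the mat"]
+ references = ["a cat was sitting on the mat"]
+
+ scores = rouge.compute(predictions=predictions, references=references)
+ # scores holds rouge1, rouge2, rougeL, rougeLsum as fractions in [0, 1];
+ # multiplying by 100 puts them on the same scale as the numbers above.
+ print({name: round(value * 100, 4) for name, value in scores.items()})
+ ```
+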
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
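+ A minimal usage sketch for long-document summarization; the Hub repo id below is inferred from the model name on this card, so adjust it if the checkpoint is hosted elsewhere:
+
+ ```python
+ # Minimal sketch: load the fine-tuned checkpoint and summarize a long document.
+ # The repo id is an assumption based on this card's model name.
+ from transformers import pipeline
+
+ summarizer = pipeline(
+     "summarization",
+     model="pszemraj/pegasus-x-large-book-summary-synthsumm-16384-v2",
+     device=0,  # use -1 (or omit) to run on CPU
+ )
+
+ long_text = "..."  # document to summarize; the 16384 in the name suggests a ~16k-token input window
+ summary = summarizer(long_text, truncation=True)[0]["summary_text"]
+ print(summary)
+ ```
+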
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0003
+ - train_batch_size: 1
+ - eval_batch_size: 1
+ - seed: 5309
+ - gradient_accumulation_steps: 8
+ - total_train_batch_size: 8
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: inverse_sqrt
+ - lr_scheduler_warmup_ratio: 0.03
+ - num_epochs: 2.0
+
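+ Roughly, these settings correspond to the `Seq2SeqTrainingArguments` sketched below; this is a reconstruction from the list above, not the actual training script, and `output_dir`, `optim`, and `predict_with_generate` are assumptions:
+
+ ```python
+ # Rough reconstruction of the hyperparameters above; values not listed on the
+ # card (output_dir, optim, predict_with_generate) are placeholders/assumptions.
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="./pegasus-x-large-book-summary-synthsumm-16384-v2",
+     learning_rate=3e-4,
+     per_device_train_batch_size=1,
+     per_device_eval_batch_size=1,
+     gradient_accumulation_steps=8,  # effective (total) train batch size: 1 * 8 = 8
+     seed=5309,
+     optim="adamw_torch",            # Adam with betas=(0.9, 0.999) and eps=1e-08 are the defaults
+     lr_scheduler_type="inverse_sqrt",
+     warmup_ratio=0.03,
+     num_train_epochs=2.0,
+     predict_with_generate=True,     # needed to compute ROUGE during evaluation
+ )
+ ```
+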
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
+ | 1.7369 | 0.38 | 125 | 1.7140 | 43.0265 | 15.8613 | 30.5774 | 38.2507 | 77.0462 |
+ | 1.7736 | 0.77 | 250 | 1.6361 | 43.0209 | 15.2384 | 29.7678 | 37.4955 | 67.6 |
+ | 1.4251 | 1.15 | 375 | 1.5931 | 46.2138 | 17.5559 | 33.0091 | 41.0385 | 74.1077 |
+ | 1.2706 | 1.54 | 500 | 1.5635 | 44.6382 | 16.5917 | 30.7551 | 39.8466 | 71.7231 |
+ | 1.4844 | 1.92 | 625 | 1.5481 | 48.141 | 19.1137 | 33.647 | 42.1211 | 73.9846 |
+
+ ### Framework versions
+
+ - Transformers 4.36.0.dev0
+ - Pytorch 2.1.0
+ - Datasets 2.15.0
+ - Tokenizers 0.15.0
generation_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 0,
+   "early_stopping": true,
+   "encoder_no_repeat_ngram_size": 4,
+   "eos_token_id": 1,
+   "forced_eos_token_id": 1,
+   "length_penalty": 0.8,
+   "max_length": 512,
+   "min_length": 8,
+   "no_repeat_ngram_size": 3,
+   "num_beams": 2,
+   "pad_token_id": 0,
+   "transformers_version": "4.36.0.dev0"
+ }
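
These defaults (2-beam search, 3-gram repetition blocking plus a 4-gram block against copying the encoder input, length penalty 0.8, summaries between 8 and 512 tokens) ship with the checkpoint and are picked up by `generate()` automatically. A minimal sketch of inspecting and overriding them, again assuming the Hub repo id inferred from the model name:

```python
# Minimal sketch: the generation_config.json above is loaded with the checkpoint
# and applied by generate(); individual settings can be overridden per call.
# The Hub repo id is an assumption based on this card's model name.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

repo_id = "pszemraj/pegasus-x-large-book-summary-synthsumm-16384-v2"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSeq2SeqLM.from_pretrained(repo_id)

print(model.generation_config)  # num_beams=2, no_repeat_ngram_size=3, max_length=512, ...

inputs = tokenizer("Replace this with a long document.", return_tensors="pt", truncation=True)
summary_ids = model.generate(
    **inputs,
    num_beams=4,     # override the default of 2 for a wider beam search
    max_length=256,  # cap the summary below the configured 512 tokens
)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```
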
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:48f52e9ade94c71590e1f8ddef2fc1d6be00d851e44e72894f1574a16878f46b
+ oid sha256:6f2f96439cffd2e4a8c883a46071965f7b6868a52cfbf05579be798a89199b10
  size 2274730128