GlycerinLOL committed on
Commit
5a56727
1 Parent(s): 42e567f

Model save

README.md ADDED
@@ -0,0 +1,75 @@
+ ---
+ base_model: google/pegasus-xsum
+ tags:
+ - generated_from_trainer
+ metrics:
+ - rouge
+ - precision
+ - recall
+ - f1
+ model-index:
+ - name: LLM_Teached_Pegasus_50k
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # LLM_Teached_Pegasus_50k
+
+ This model is a fine-tuned version of [google/pegasus-xsum](https://huggingface.co/google/pegasus-xsum) on an unknown dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 1.7193
+ - Rouge1: 0.4541
+ - Rouge2: 0.2071
+ - Rougel: 0.3708
+ - Rougelsum: 0.3708
+ - Gen Len: 26.4531
+ - Precision: 0.9082
+ - Recall: 0.9061
+ - F1: 0.907
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 2e-05
+ - train_batch_size: 32
+ - eval_batch_size: 16
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - num_epochs: 4
+ - mixed_precision_training: Native AMP
+
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | Precision | Recall | F1     |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|:---------:|:------:|:------:|
+ | No log        | 1.0   | 390  | 1.8258          | 0.4338 | 0.1906 | 0.3496 | 0.3498    | 26.2967 | 0.9049    | 0.9023 | 0.9034 |
+ | 2.1621        | 2.0   | 781  | 1.7537          | 0.4449 | 0.2005 | 0.3633 | 0.3633    | 26.2727 | 0.9068    | 0.9044 | 0.9054 |
+ | 1.8794        | 3.0   | 1172 | 1.7268          | 0.4518 | 0.2061 | 0.3696 | 0.3695    | 26.4345 | 0.9078    | 0.9058 | 0.9066 |
+ | 1.8271        | 3.99  | 1560 | 1.7193          | 0.4541 | 0.2071 | 0.3708 | 0.3708    | 26.4531 | 0.9082    | 0.9061 | 0.907  |
+
+
+ ### Framework versions
+
+ - Transformers 4.36.0
+ - Pytorch 2.0.1+cu117
+ - Datasets 2.14.5
+ - Tokenizers 0.15.0
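
The hyperparameters above determine the effective batch size and, together with the dataset size implied by the model name, the step counts in the results table. A minimal sketch of that arithmetic (the ~50k example count is suggested only by the "_50k" name, and single-device training is an assumption; neither is stated in the card):

```python
# Effective batch size: per-device batch size x gradient accumulation steps.
# num_devices = 1 is an assumption; the card does not state the device count.
train_batch_size = 32
gradient_accumulation_steps = 4
num_devices = 1

total_train_batch_size = train_batch_size * gradient_accumulation_steps * num_devices
print(total_train_batch_size)  # 128, matching the reported total_train_batch_size

# Assuming ~50k training examples (implied by the "_50k" model name),
# the per-epoch step count is consistent with the ~390 steps per epoch
# visible in the results table (390, 781, 1172, 1560).
steps_per_epoch = 50_000 // total_train_batch_size
print(steps_per_epoch)  # 390
```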
generation_config.json ADDED
@@ -0,0 +1,11 @@
+ {
+   "bos_token_id": 0,
+   "decoder_start_token_id": 0,
+   "eos_token_id": 1,
+   "forced_eos_token_id": 1,
+   "length_penalty": 0.6,
+   "max_length": 64,
+   "num_beams": 8,
+   "pad_token_id": 0,
+   "transformers_version": "4.36.0"
+ }
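
The generation config added above fixes the model's default beam-search decoding settings. A small sketch parsing the file contents as JSON to highlight the decoding-relevant fields (passing the same keyword arguments to `model.generate` is the usual way to override them at inference time):

```python
import json

# The generation config added in this commit, reproduced verbatim as a dict
# so the decoding settings can be inspected or overridden per call.
generation_config = json.loads("""
{
  "bos_token_id": 0,
  "decoder_start_token_id": 0,
  "eos_token_id": 1,
  "forced_eos_token_id": 1,
  "length_penalty": 0.6,
  "max_length": 64,
  "num_beams": 8,
  "pad_token_id": 0,
  "transformers_version": "4.36.0"
}
""")

# length_penalty < 1.0 biases beam search toward shorter candidates,
# consistent with the ~26-token average generation length reported above.
print(generation_config["num_beams"], generation_config["max_length"])  # 8 64
```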
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:9cd9f62d4fe7cb289202152192804ccceb19be1c27905f51981b25c5cb1832c9
+ oid sha256:c3923e58b24b2e9b5c05712e656821e8aff20dac16a2f27801f9e5b872301bd9
  size 2279458540
runs/Mar02_10-19-21_oi5vv8ctr1709312124223-tkfr5/events.out.tfevents.1709345965.oi5vv8ctr1709312124223-tkfr5.13061.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a672a90e578d63c052e2e6b33f5e47e4a9ededd64a7c2f87d271d487d7d8bbc2
- size 7777
+ oid sha256:3952bcedbc028ffdee24efd3eb6c71babf30480348ff2e642f8a68459f75a925
+ size 8805