Gayathri142214002 committed
Commit
fca0903
1 Parent(s): 8d2f6e8

Pegasus_paraphraser_Com_10

Files changed (3)
  1. README.md +12 -6
  2. generation_config.json +1 -1
  3. model.safetensors +1 -1
README.md CHANGED
@@ -14,6 +14,8 @@ should probably proofread and complete it, then remove this comment. -->
 # Pegasus_paraphraser_Com_10
 
 This model is a fine-tuned version of [Gayathri142214002/Pegasus_paraphraser_Com_9](https://huggingface.co/Gayathri142214002/Pegasus_paraphraser_Com_9) on an unknown dataset.
+It achieves the following results on the evaluation set:
+- Loss: 0.1777
 
 ## Model description
 
@@ -33,22 +35,26 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 0.0001
-- train_batch_size: 16
+- train_batch_size: 4
 - eval_batch_size: 8
 - seed: 42
 - gradient_accumulation_steps: 8
-- total_train_batch_size: 128
+- total_train_batch_size: 32
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - num_epochs: 4
 
 ### Training results
 
+| Training Loss | Epoch | Step | Validation Loss |
+|:-------------:|:-----:|:----:|:---------------:|
+| 0.1512        | 1.84  | 500  | 0.1656          |
+| 0.1346        | 3.68  | 1000 | 0.1777          |
 
 
 ### Framework versions
 
-- Transformers 4.36.2
-- Pytorch 2.1.2+cu121
-- Datasets 2.16.1
-- Tokenizers 0.15.0
+- Transformers 4.39.2
+- Pytorch 2.2.2+cu121
+- Datasets 2.18.0
+- Tokenizers 0.15.2
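A consistency check on the hyperparameters above: under gradient accumulation, the effective batch size per optimizer step is the per-device batch size times the number of accumulation steps, which is how the reported total_train_batch_size changes from 128 to 32 in this commit. A minimal sketch of that arithmetic, using the values from the diff:

```python
# Effective batch size under gradient accumulation: gradients from several
# small forward/backward passes are summed before each optimizer step.
train_batch_size = 4             # per-device batch size (new value in this commit)
gradient_accumulation_steps = 8  # unchanged by this commit

total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 32, matching the reported total_train_batch_size
```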
generation_config.json CHANGED
@@ -8,5 +8,5 @@
   "max_length": 60,
   "num_beams": 8,
   "pad_token_id": 0,
-  "transformers_version": "4.36.2"
+  "transformers_version": "4.39.2"
 }
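For readers who want to try the updated checkpoint, a hedged usage sketch follows. This is an assumption about intended use, not part of the commit: the Pegasus classes are the standard Hugging Face transformers API, and the generation defaults mirrored below are the ones shipped in generation_config.json above. Note that loading downloads the full weights (about 2.3 GB, per model.safetensors).

```python
# Sketch (assumption, not from the commit): paraphrasing with the fine-tuned
# checkpoint via the standard transformers API.
MODEL_NAME = "Gayathri142214002/Pegasus_paraphraser_Com_10"

# Defaults shipped in the repo's generation_config.json (see diff above);
# model.generate() picks these up automatically from the checkpoint.
GENERATION_DEFAULTS = {"max_length": 60, "num_beams": 8, "pad_token_id": 0}

def paraphrase(text: str) -> str:
    # Imported lazily: the ~2.3 GB weight download only happens on first call.
    from transformers import PegasusForConditionalGeneration, PegasusTokenizer

    tokenizer = PegasusTokenizer.from_pretrained(MODEL_NAME)
    model = PegasusForConditionalGeneration.from_pretrained(MODEL_NAME)
    inputs = tokenizer(text, return_tensors="pt", truncation=True)
    outputs = model.generate(**inputs)  # beam search per GENERATION_DEFAULTS
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```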
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:3ab920ae5bdca70518d88a1b57ed8a39628d5c220dfb77ebb1b0953a551d8e87
+oid sha256:fb2030534d216b893a22d275b0dc598bdca1b0e21536de892c51920c6016c01d
 size 2275755748