LEESIHYUN committed on
Commit
98412a2
1 Parent(s): e66b5c1

Training complete!

Files changed (3)
  1. README.md +10 -8
  2. generation_config.json +1 -1
  3. tokenizer.json +11 -2
README.md CHANGED
@@ -2,6 +2,8 @@
 base_model: google/pegasus-cnn_dailymail
 tags:
 - generated_from_trainer
+datasets:
+- samsum
 model-index:
 - name: pegasus-samsum
   results: []
@@ -12,7 +14,7 @@ should probably proofread and complete it, then remove this comment. -->
 
 # pegasus-samsum
 
-This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on an unknown dataset.
+This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset.
 It achieves the following results on the evaluation set:
 - Loss: 1.4833
 
@@ -46,14 +48,14 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss |
-|:-------------:|:-----:|:----:|:---------------:|
-| 1.6599        | 0.54  | 500  | 1.4833          |
+| Training Loss | Epoch  | Step | Validation Loss |
+|:-------------:|:------:|:----:|:---------------:|
+| 1.6599        | 0.5430 | 500  | 1.4833          |
 
 
 ### Framework versions
 
-- Transformers 4.38.2
-- Pytorch 2.2.1+cu121
-- Datasets 2.18.0
-- Tokenizers 0.15.2
+- Transformers 4.42.4
+- Pytorch 2.3.1+cu121
+- Datasets 2.20.0
+- Tokenizers 0.19.1
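The updated training-results hunk refines the logged epoch from 0.54 to 0.5430 at step 500, which pins down the implied epoch length. A quick back-of-the-envelope check (the steps-per-epoch figure below is derived from the table, not stated anywhere in the card):

```python
# Derive the implied number of optimizer steps per full epoch from the
# (step, epoch) pair in the updated training-results table.
step = 500      # "Step" column
epoch = 0.5430  # "Epoch" column (fraction of one epoch completed)

steps_per_epoch = step / epoch
print(round(steps_per_epoch))  # -> 921
```

So the run logged its single evaluation roughly halfway through an epoch of about 921 optimizer steps.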
generation_config.json CHANGED
@@ -9,5 +9,5 @@
   "min_length": 32,
   "num_beams": 8,
   "pad_token_id": 0,
-  "transformers_version": "4.38.2"
+  "transformers_version": "4.42.4"
 }
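The only substantive change in this file is the `transformers_version` bump to match the new framework versions in the README; the decoding settings themselves (beam search with 8 beams, minimum length 32) are untouched. A minimal sanity check that the updated fragment is valid JSON (fields outside the hunk are omitted here, so this covers only the visible portion):

```python
import json

# The fields visible in the updated hunk of generation_config.json.
# Fields above the hunk are not shown in the diff, so this is a
# fragment check, not the full file.
fragment = """{
  "min_length": 32,
  "num_beams": 8,
  "pad_token_id": 0,
  "transformers_version": "4.42.4"
}"""

config = json.loads(fragment)
print(config["transformers_version"])  # -> 4.42.4
```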
tokenizer.json CHANGED
@@ -2,11 +2,20 @@
   "version": "1.0",
   "truncation": {
     "direction": "Right",
-    "max_length": 128,
+    "max_length": 1024,
     "strategy": "LongestFirst",
     "stride": 0
   },
-  "padding": null,
+  "padding": {
+    "strategy": {
+      "Fixed": 1024
+    },
+    "direction": "Right",
+    "pad_to_multiple_of": null,
+    "pad_id": 0,
+    "pad_type_id": 0,
+    "pad_token": "<pad>"
+  },
   "added_tokens": [
     {
       "id": 0,
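The tokenizer hunk raises `max_length` from 128 to 1024 and replaces `"padding": null` with fixed right-side padding to 1024 using pad id 0 (`<pad>`). In plain Python, the effective pre-processing behaves roughly like the sketch below (the function name is illustrative, not the `tokenizers` library API):

```python
MAX_LENGTH = 1024  # new "max_length" and "Fixed" padding size
PAD_ID = 0         # "pad_id" for the "<pad>" token

def truncate_and_pad(ids, max_length=MAX_LENGTH, pad_id=PAD_ID):
    """Right-truncate to max_length, then right-pad to a fixed length,
    mirroring the truncation/padding sections of the new tokenizer.json."""
    ids = ids[:max_length]                           # "direction": "Right" truncation
    return ids + [pad_id] * (max_length - len(ids))  # "direction": "Right" padding

short = truncate_and_pad([101, 2023, 102])
long = truncate_and_pad(list(range(2000)))
print(len(short), len(long))  # -> 1024 1024
```

With `"Fixed": 1024`, every encoded sequence comes out exactly 1024 ids long, which matches the longer dialogue inputs this summarization model is tuned for, compared with the previous 128-token cap.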