DunnBC22 committed
Commit 2e608d8
1 Parent(s): d6db132

Update README.md

Files changed (1):
  1. README.md +14 -6
README.md CHANGED
@@ -6,9 +6,6 @@ model-index:
   results: []
 ---
 
-<!-- This model card has been generated automatically according to the information the Trainer had access to. You
-should probably proofread and complete it, then remove this comment. -->
-
 # pegasus-multi_news-NewsSummarization_BBC
 
 This model is a fine-tuned version of [google/pegasus-multi_news](https://huggingface.co/google/pegasus-multi_news) on the None dataset.
@@ -19,14 +16,21 @@ More information needed
 
 ## Intended uses & limitations
 
-More information needed
+I used this project to improve my skill set. I thank all of the authors of the different technologies and
+dataset(s) for their contributions, which made this possible. I am not too worried about getting
+credit for my part, but please make sure to properly cite the authors of the different technologies
+and dataset(s), as they absolutely deserve credit for their contributions.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/pariza/bbc-news-summary
 
 ## Training procedure
 
+Here is the link to the script that I created to train this model:
+
+https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/Text_Summarization_BBC_News-Pegasus.ipynb
+
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
@@ -43,11 +47,15 @@ The following hyperparameters were used during training:
 
 ### Training results
 
+Unfortunately, I did not set the metrics to upload here automatically. They are as follows:
+
+| Training Loss | Epoch | Step | rouge1   | rouge2   | rougeL   | rougeLsum |
+|:-------------:|:-----:|:----:|:--------:|:--------:|:--------:|:---------:|
+| 6.41979       | 2.0   | 214  | 0.584474 | 0.463574 | 0.408729 | 0.408431  |
 
 ### Framework versions
 
 - Transformers 4.21.3
 - Pytorch 1.12.1
 - Datasets 2.4.0
-- Tokenizers 0.12.1
+- Tokenizers 0.12.1
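
For reference, here is a minimal sketch of how the model described in this card could be loaded for summarization with the Hugging Face `transformers` pipeline. The repository id `DunnBC22/pegasus-multi_news-NewsSummarization_BBC` is assumed from the commit author and model name, and the generation settings are illustrative rather than taken from the card.

```python
# Minimal inference sketch. Assumptions: the repo id below matches this model card,
# and the generation settings (max_length/min_length) are illustrative only.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/pegasus-multi_news-NewsSummarization_BBC",
)

# Replace with a full news article; BBC-style articles are what the model was fine-tuned on.
article = "Your news article text goes here."

# Truncate long inputs to the model's maximum length and generate a summary.
summary = summarizer(article, max_length=128, min_length=32, truncation=True)
print(summary[0]["summary_text"])
```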