DunnBC22 committed
Commit f9a0822
1 Parent(s): 924827e

Update README.md

Files changed (1): README.md +7 -7
README.md CHANGED
@@ -14,23 +14,23 @@ pipeline_tag: text2text-generation
 
 # bart-base-News_Summarization_CNN
 
-This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
+This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base).
 It achieves the following results on the evaluation set:
 - Loss: 0.1603
 
 ## Model description
 
-Using the dataset from the following link, I trained a text summarization model.
-
-https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning
+For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Text%20Summarization/CNN%20News%20Text%20Summarization/CNN%20News%20Text%20Summarization.ipynb
 
 ## Intended uses & limitations
 
-I used this to improve my skillset. I thank all of authors of the different technologies and dataset(s) for their contributions that have this possible. I am not too worried about getting credit for my part, but make sure to properly cite the authors of the different technologies and dataset(s) as they absolutely deserve credit for their contributions.
+I used this to improve my skill set. I thank all of the authors of the different technologies and dataset(s) for their contributions that have made this possible.
+
+Please make sure to properly cite the authors of the different technologies and dataset(s), as they absolutely deserve credit for their contributions.
 
 ## Training and evaluation data
 
-More information needed
+Dataset Source: https://www.kaggle.com/datasets/hadasu92/cnn-articles-after-basic-cleaning
 
 ## Training procedure
 The model was trained on a CPU, using all samples where the article is shorter than 820 words and the summary is no longer than 52 words. Additionally, any sample missing a news article or its summary was removed. In all, 24,911 of the 42,025 available samples were used for training/testing/evaluation.
@@ -54,7 +54,7 @@ The following hyperparameters were used during training:
 
 ### Training results
 
-| Training Loss | Epoch | Step | Validation Loss | rouge1 | rouge2 | rougeL | rougeLsum |
+| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
 |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|
 | 0.7491 | 1.0 | 1089 | 0.1618 | N/A | N/A | N/A | N/A |
 | 0.1641 | 2.0 | 2178 | 0.1603 | 0.834343 | 0.793822 | 0.823824 | 0.823778 |
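For anyone who wants to try the model directly, here is a minimal inference sketch using the `transformers` summarization pipeline. The repo id `DunnBC22/bart-base-News_Summarization_CNN` is assumed from this page's author and model name, and `max_length=52` simply mirrors the summary-length cap described under Training procedure.

```python
# A minimal inference sketch. The repo id below is assumed from this page's
# author (DunnBC22) and model name; adjust it if the actual id differs.
from transformers import pipeline

summarizer = pipeline(
    "summarization",
    model="DunnBC22/bart-base-News_Summarization_CNN",
)

article = "..."  # a CNN-style news article, ideally under 820 words
result = summarizer(article, max_length=52, truncation=True)
print(result[0]["summary_text"])
```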
 
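The Training procedure paragraph describes a length and completeness filter on the raw dataset; the sketch below shows one way to reproduce it under stated assumptions (the linked notebook is the authoritative source). The CSV filename and column names are hypothetical placeholders for the Kaggle data.

```python
# A sketch of the length/completeness filter described under "Training
# procedure". The CSV filename and the "article"/"summary" column names
# are assumptions; the Kaggle dataset may use different names.
import pandas as pd

df = pd.read_csv("cnn-articles-after-basic-cleaning.csv")  # hypothetical filename

# Drop any sample missing either the news article or its summary.
df = df.dropna(subset=["article", "summary"])

def word_count(text: str) -> int:
    return len(str(text).split())

# Keep articles under 820 words and summaries of at most 52 words.
mask = (df["article"].map(word_count) < 820) & (df["summary"].map(word_count) <= 52)
df = df[mask]
# The card reports 24,911 of the 42,025 samples surviving this filter.
```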
 
54
 
55
  ### Training results
56
 
57
+ | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | RougeL | RougeLsum |
58
  |:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|:----------:|:----------:|
59
  | 0.7491 | 1.0 | 1089 | 0.1618 | N/A | N/A | N/A | N/A |
60
  | 0.1641 | 2.0 | 2178 | 0.1603 | 0.834343 | 0.793822 | 0.823824 | 0.823778 |
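The Rouge1/Rouge2/RougeL/RougeLsum columns in the results table are standard ROUGE scores. Below is a minimal sketch of computing them with the `evaluate` library; this assumes the common `evaluate`/`rouge_score` setup, not necessarily the exact code from the linked notebook.

```python
# A sketch of computing the ROUGE columns shown in the results table, using
# the `evaluate` library (pip install evaluate rouge_score). This is a common
# setup, not necessarily the exact code from the linked notebook.
import evaluate

rouge = evaluate.load("rouge")

predictions = ["the model's generated summary"]        # decoded model outputs
references = ["the human-written reference summary"]   # gold summaries

scores = rouge.compute(predictions=predictions, references=references)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```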