---
task_categories:
- summarization
- text2text-generation
language:
- 'no'
- nb
size_categories:
- 10K<n<100K
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: url
    dtype: string
  - name: date_scraped
    dtype: string
  - name: headline
    dtype: string
  - name: category
    dtype: string
  - name: ingress
    dtype: string
  - name: article
    dtype: string
  splits:
  - name: train
    num_bytes: 26303219.28053567
    num_examples: 10874
  - name: validation
    num_bytes: 1981086.682983145
    num_examples: 819
  - name: test
    num_bytes: 3144582.036481182
    num_examples: 1300
  download_size: 19441287
  dataset_size: 31428888.0
---

# SNL Summarization Dataset

The source of this dataset is a web scrape of SNL (Store Norske Leksikon), a publicly owned Norwegian encyclopedia. Articles in SNL are structured so that the first paragraph (the lead) acts as a summary of the entire article.
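
For reference, the fields and splits listed in the metadata above can be inspected with the Hugging Face `datasets` library. The repository ID below is a placeholder, not the actual Hub path of this dataset.

```python
from datasets import load_dataset

# Placeholder repository ID; replace it with this dataset's actual Hub path.
dataset = load_dataset("<username>/snl-summarization")

# Each example has: id, url, date_scraped, headline, category, ingress, article.
example = dataset["train"][0]
print(example["headline"])
print(example["ingress"])        # the lead paragraph, i.e. the reference summary
print(example["article"][:300])  # the article body used as the source text
```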

## Methodology

From our thesis:

We couldn't find any existing datasets containing SNL data, so we decided to create our own by scraping articles from SNL.no. The first step involved gathering a list of all article URLs on the site. We extracted the URLs from the sitemaps and retained only those following the format "https://snl.no/<article name>" to avoid non-article pages. Next, we scraped the URLs concurrently, downloading several articles at the same time with the Python module grequests, and parsed the received HTML with beautifulsoup4. We extracted the text from the lead and from the rest of the article, joining the latter while stripping extraneous whitespace. Additionally, we saved metadata such as the URL, headline, and category for each article.
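
As a rough sketch of this scraping and parsing step (not the exact thesis code), the fetch-and-parse logic could look like the following; the CSS selectors are assumptions about snl.no's markup and would need to be checked against the live pages.

```python
import grequests
from bs4 import BeautifulSoup

def parse_article(html: str) -> dict:
    """Extract the lead and the joined body text from an SNL article page."""
    soup = BeautifulSoup(html, "html.parser")
    # Hypothetical selectors: verify against the actual snl.no markup.
    lead = soup.select_one(".article-lead")
    body_paragraphs = soup.select(".article-body p")
    return {
        "ingress": lead.get_text(strip=True) if lead else "",
        "article": " ".join(p.get_text(strip=True) for p in body_paragraphs),
    }

# Example article URLs following the "https://snl.no/<article name>" format.
urls = ["https://snl.no/Norge", "https://snl.no/Oslo"]

# grequests fires the requests concurrently and returns the responses in order.
responses = grequests.map(grequests.get(u) for u in urls)
articles = [parse_article(r.text) for r in responses if r is not None]
```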
 
To filter out very short articles, we set two criteria for keeping an article: the lead had to be at least 100 characters long, and the rest of the article had to be longer than 400 characters. Finally, we split the dataset 84%/6%/10% into train/validation/test sets. This division was chosen to provide enough data for training our models while still leaving an adequate sample size for evaluation: allocating the largest share (84%) to training was meant to optimize the models' learning, the 6% validation set was used to tune the models and their hyperparameters, and the remaining 10% was reserved for the final evaluation of model performance on unseen data in the test set.
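
A minimal sketch of the length filter and the 84%/6%/10% split, assuming the scraped articles are held in memory as a list of dicts; the `datasets` split helper and the random seed are choices made for this illustration, not necessarily what was used in the thesis.

```python
from datasets import Dataset

# Stand-in for the scraped articles; each record has at least "ingress" (the lead)
# and "article" (the body), as produced by the scraping step above.
records = [
    {"ingress": f"Lead paragraph {i} " + "x" * 100,
     "article": f"Article body {i} " + "y" * 500}
    for i in range(50)
]

ds = Dataset.from_list(records)

# Keep only articles whose lead is at least 100 characters long and whose
# body is longer than 400 characters.
ds = ds.filter(lambda ex: len(ex["ingress"]) >= 100 and len(ex["article"]) > 400)

# 84% train / 16% held out, then split the held-out 16% into 6% validation
# and 10% test (0.10 / 0.16 = 0.625).
train_rest = ds.train_test_split(test_size=0.16, seed=42)
val_test = train_rest["test"].train_test_split(test_size=0.625, seed=42)

splits = {
    "train": train_rest["train"],
    "validation": val_test["train"],
    "test": val_test["test"],
}
print({name: len(split) for name, split in splits.items()})
```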


# License
Please refer to the license of SNL.

# Citation
If you use this dataset in your work, please cite the master's thesis that this dataset was created for:
```
@mastersthesis{navjord2023beyond,
  title={Beyond extractive: advancing abstractive automatic text summarization in Norwegian with transformers},
  author={Navjord, J{\o}rgen Johnsen and Korsvik, Jon-Mikkel Ryen},
  year={2023},
  school={Norwegian University of Life Sciences, {\AA}s}
}
```