mrshu committed (verified)
Commit a9a20ba · Parent: c7731f7

Update SMESum dataset

Files changed (5):
  1. .gitattributes +3 -0
  2. README.md +127 -0
  3. data/test.jsonl +3 -0
  4. data/train.jsonl +3 -0
  5. data/validation.jsonl +3 -0
.gitattributes CHANGED
@@ -57,3 +57,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ data/test.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/train.jsonl filter=lfs diff=lfs merge=lfs -text
+ data/validation.jsonl filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,127 @@
---
pretty_name: SMESum
dataset_summary: The Slovak SME news summarization corpus.
tags:
- news
- summarization
- slovak
- slovak-language
task_categories:
- summarization
task_ids:
- summarization
language:
- sk
size_categories:
- 10K<n<100K
license: other
paper: https://aclanthology.org/2020.lrec-1.830
repository: https://github.com/NaiveNeuron/sme-sum
homepage: https://sme.sk
configs:
- config_name: default
  description: Slovak news summarization split extracted from the SME archive.
---

# Dataset Card for SMESum

## Dataset Summary

SMESum is a deterministic reproduction of the Slovak news summarization corpus introduced by Šuppa and Adamec (2020). It contains Slovak news articles sourced from the SME news portal via the Internet Archive. Each example provides the full article (`document`) together with two short abstractive fields (`title`, `introduction`) that, when concatenated, form the gold summary, mirroring the setup described in the paper. The corpus is split into train/validation/test partitions of 64,001/8,001/8,001 examples using a salted SHA-256 hash of each filename to guarantee reproducibility.

## Supported Tasks and Leaderboards

- `summarization`: Abstractive or extractive summarization of Slovak news articles. The original paper benchmarks several extractive baselines, including TextRank and a multilingual BERT model fine-tuned for extractive summarization.
- `classification`: Classifying articles into SME section labels (e.g. `sport`) via the `category` field.

## Languages

- Slovak (ISO 639-1: `sk`; ISO 639-3: `slk`). Text retains the original punctuation, casing, and diacritics.

## Dataset Structure

### Data Instances

Each row is a JSON object with the following schema:

```json
{
  "title": "<headline of the article>",
  "introduction": "<short abstract shown below the headline>",
  "document": "<full body text of the article>",
  "category": "<SME section label, e.g. 'sport'>",
  "url": "<Wayback Machine URL pointing to the captured article>"
}
```

To reproduce the summarization target described in the paper, concatenate `title` and `introduction`.

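As a concrete illustration, the gold summary can be assembled from a record like so. The sample record below is invented for this example, not drawn from the corpus:

```python
# Hypothetical record shaped like one corpus row (all values are made up).
record = {
    "title": "Slovan vyhral derby",
    "introduction": "Bratislavský klub zvíťazil nad Trnavou 2:1.",
    "document": "...",
    "category": "sport",
    "url": "https://web.archive.org/...",
}

def gold_summary(example: dict) -> str:
    """Join the headline and the teaser into the summarization target."""
    return f"{example['title']} {example['introduction']}".strip()

print(gold_summary(record))
```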
### Data Fields

- `title` (`string`): Article headline authored by SME editors.
- `introduction` (`string`): Teaser/abstract (one or two sentences).
- `document` (`string`): Full article text, as scraped from the archived page.
- `category` (`string`): SME topical section (e.g., `domov`, `svet`, `sport`, `ekonomika`).
- `url` (`string`): Internet Archive URL of the captured article.

### Data Splits

| Split      | Records | Avg. words (document) | Avg. sentences (document) | Avg. words (summary) | Avg. sentences (summary) |
|------------|---------|-----------------------|---------------------------|----------------------|--------------------------|
| train      | 64,001  | 339.09                | 18.08                     | 23.61                | 2.16                     |
| validation | 8,001   | 344.99                | 18.18                     | 23.58                | 2.16                     |
| test       | 8,001   | 332.25                | 17.96                     | 23.46                | 2.15                     |

These statistics replicate Table 2 in Šuppa and Adamec (2020).

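Per-split statistics of this kind can be recomputed from the JSONL files with a sketch like the following. Whitespace word-splitting and the naive sentence regex are assumptions of this example; the paper's tokenizer may differ slightly, so the numbers will not match Table 2 exactly:

```python
import json
import re

def split_stats(path: str) -> dict:
    """Average word and sentence counts for documents and summaries in
    one JSONL split. The summary is title + introduction, as the card
    describes; tokenization here is a deliberately simple approximation."""
    n = doc_w = doc_s = sum_w = sum_s = 0
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            ex = json.loads(line)
            summary = f"{ex['title']} {ex['introduction']}"
            doc_w += len(ex["document"].split())
            sum_w += len(summary.split())
            # Count sentence-final punctuation runs followed by whitespace/EOL.
            doc_s += len(re.findall(r"[.!?]+(?:\s|$)", ex["document"]))
            sum_s += len(re.findall(r"[.!?]+(?:\s|$)", summary))
            n += 1
    return {
        "records": n,
        "avg_doc_words": doc_w / n,
        "avg_doc_sentences": doc_s / n,
        "avg_summary_words": sum_w / n,
        "avg_summary_sentences": sum_s / n,
    }
```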
### Loading with `datasets`

```python
from datasets import load_dataset

dataset = load_dataset("NaiveNeuron/SMESum")
sample = dataset["train"][0]
print(sample["title"])
print(sample["introduction"])
print(sample["document"])
```

For local development, you can run the loader against a repository checkout:

```python
dataset = load_dataset("./SMESum")
```

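For the classification use noted earlier, the `category` field can be mapped to integer class ids. The toy rows and the frequency-then-alphabetical ordering below are assumptions of this sketch, not part of the dataset tooling:

```python
from collections import Counter

# Toy rows standing in for dataset records; `category` provides the label.
rows = [
    {"category": "sport"}, {"category": "domov"},
    {"category": "sport"}, {"category": "ekonomika"},
]

# Deterministic label mapping: most frequent sections first,
# ties broken alphabetically.
counts = Counter(r["category"] for r in rows)
label2id = {
    lab: i
    for i, (lab, _) in enumerate(sorted(counts.items(), key=lambda kv: (-kv[1], kv[0])))
}
labels = [label2id[r["category"]] for r in rows]
```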
## Data Preprocessing

Source articles originate from the [`NaiveNeuron/sme-sum`](https://github.com/NaiveNeuron/sme-sum) utilities, which scrape SME.sk snapshots from the Wayback Machine. Each `.data` file is a UTF-8 encoded JSON payload with the fields above. This project orders filenames deterministically via `sha256(salt + filename)` (with salt `xsum-sme-split-v1`) and selects exactly 64,001/8,001/8,001 entries for train/validation/test. No cleaning, tokenization, or normalization is applied beyond what the original crawl performed.

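The salted-hash split described above can be sketched as follows. Only the salt and the split sizes are documented; the exact byte-level concatenation and slicing order are assumptions of this sketch:

```python
import hashlib

SALT = "xsum-sme-split-v1"
SPLIT_SIZES = {"train": 64_001, "validation": 8_001, "test": 8_001}

def split_key(filename: str) -> str:
    """Deterministic sort key: hex SHA-256 of the salted filename."""
    return hashlib.sha256((SALT + filename).encode("utf-8")).hexdigest()

def assign_splits(filenames: list[str]) -> dict[str, list[str]]:
    """Order filenames by their salted hash, then carve off consecutive
    train/validation/test slices of the documented sizes (slices simply
    come up short if fewer files are supplied)."""
    ordered = sorted(filenames, key=split_key)
    splits, start = {}, 0
    for name, size in SPLIT_SIZES.items():
        splits[name] = ordered[start:start + size]
        start += size
    return splits
```

Because the hash depends only on the salt and the filename, rerunning the script on the same crawl always reproduces the same partitions.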
## Data Collection

- **Source**: SME.sk, a major Slovak news portal. Articles were harvested from archived snapshots hosted by the Internet Archive.
- **Timeframe**: Articles span multiple years leading up to late 2019, in line with the crawl described in the paper.
- **Selection criteria**: Paid-content stubs and incomplete articles were excluded. Categories cover general news, world affairs, business, sports, travel, tech, culture, and opinion.

## Citation

```bibtex
@inproceedings{suppa-adamec-2020-sme,
  title     = {A Summarization Dataset of Slovak News Articles},
  author    = {Marek {\v{S}}uppa and Jergu{\v{s}} Adamec},
  booktitle = {Proceedings of the Twelfth Language Resources and Evaluation Conference (LREC 2020)},
  year      = {2020},
  pages     = {6725--6730},
  address   = {Marseille, France},
  publisher = {European Language Resources Association},
  url       = {https://aclanthology.org/2020.lrec-1.830}
}
```

## Dataset Curators

The deterministic split script and packaging in this repository were prepared by the maintainers of the SMESum project. The original crawl and dataset definition were authored by Marek Šuppa and Jerguš Adamec (Comenius University in Bratislava).

## Licensing Information

- **Original content**: © Petit Press, used under fair-use/academic research assumptions.
- **Paper**: Licensed under CC-BY-NC (LREC proceedings).
- **This split**: Scripts and JSONL artifacts follow the repository's license (MIT unless noted otherwise).
data/test.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:2006c9f88ab0e3f426b24ee80d6147badd730db03dfc37a6080e06c5adb5d9de
size 25217245
data/train.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:39e468f3ebb1117597bbe972c79590bb8552f93d19c2362bf0be3bb7801a3577
size 201997101
data/validation.jsonl ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:fe9fd6b75105430b790fe8747c4dff2cbbc2edf742bf5491990c067d2cf5cb4e
size 25097506