Dennis Aumiller committed on
Commit
469eac8
1 Parent(s): a71943c

Adding initial data release.


This includes a preliminary draft of the dataset card, as well as a JSON-converted version of our dataset.

Files changed (6)
  1. .gitattributes +4 -0
  2. .gitignore +1 -0
  3. README.md +215 -0
  4. data/test.json +3 -0
  5. data/train.json +3 -0
  6. data/validation.json +3 -0
.gitattributes CHANGED
@@ -25,3 +25,7 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zstandard filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ data/ filter=lfs diff=lfs merge=lfs -text
+ data/test.json filter=lfs diff=lfs merge=lfs -text
+ data/train.json filter=lfs diff=lfs merge=lfs -text
+ data/validation.json filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1 @@
 
.idea/
README.md ADDED
@@ -0,0 +1,215 @@
---
annotations_creators:
- found
- expert-generated
language_creators:
- found
- machine-generated
languages:
- de-DE
licenses:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: Klexikon
size_categories:
- unknown
source_datasets:
- original
task_categories:
- conditional-text-generation
task_ids:
- summarization
- text-simplification
paperswithcode_id: klexikon
---

# Dataset Card for the Klexikon Dataset

## Table of Contents
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

- **Homepage:** [N/A]
- **Repository:** [Klexikon repository](https://github.com/dennlinger/klexikon)
- **Paper:** [Klexikon: A German Dataset for Joint Summarization and Simplification](https://arxiv.org/abs/2201.07198)
- **Leaderboard:** [N/A]
- **Point of Contact:** [Dennis Aumiller](mailto:dennis.aumiller@gmail.com)

### Dataset Summary

The Klexikon dataset is a German resource of document-aligned texts between German Wikipedia and the children's lexicon "Klexikon". The dataset was created for joint text simplification and summarization, and contains almost 2900 aligned article pairs.
Notably, the children's articles use simpler language than the original Wikipedia articles; this comes in addition to a clear length discrepancy between the source (Wikipedia) and target (Klexikon) domain.

### Supported Tasks and Leaderboards

- `summarization`: The dataset can be used to train a model for summarization. In particular, it poses a harder challenge than some commonly used datasets (e.g., CNN/DailyMail), which tend to suffer from positional biases in the source text; there it is very easy to generate high-scoring (ROUGE) solutions by simply taking the three leading sentences. Our dataset provides a more challenging extraction task, combined with the additional difficulty of finding lexically appropriate simplifications.
- `simplification`: While not currently supported by the HF task board, text simplification is concerned with the appropriate representation of a text for disadvantaged readers (e.g., children, language learners, or dyslexic readers).

For scoring, we ran preliminary experiments based on [ROUGE](https://huggingface.co/metrics/rouge); however, we want to cautiously point out that ROUGE is incapable of accurately depicting simplification appropriateness.
We combined this with Flesch readability scores, as implemented by [textstat](https://github.com/shivam5992/textstat).
Note that simplification metrics such as [SARI](https://huggingface.co/metrics/sari) are not applicable here, since they require sentence alignments, which we do not provide.

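Since the texts are German, readability is best measured with Amstad's German adaptation of the Flesch Reading Ease formula, `180 - ASL - 58.5 * ASW` (ASL: average sentence length in words; ASW: average syllables per word). The following is a minimal sketch using a naive vowel-group syllable heuristic, not textstat's more careful implementation:

```python
import re

VOWEL_GROUPS = re.compile(r"[aeiouyäöü]+", re.IGNORECASE)

def count_syllables(word: str) -> int:
    # Rough heuristic: one syllable per contiguous vowel group.
    return max(1, len(VOWEL_GROUPS.findall(word)))

def flesch_de(sentences: list[str]) -> float:
    """Amstad's German Flesch Reading Ease: 180 - ASL - 58.5 * ASW."""
    words = [w for s in sentences for w in re.findall(r"[a-zA-ZäöüßÄÖÜ]+", s)]
    asl = len(words) / len(sentences)                          # words per sentence
    asw = sum(count_syllables(w) for w in words) / len(words)  # syllables per word
    return 180.0 - asl - 58.5 * asw
```

Higher scores indicate easier text, so Klexikon articles should generally score higher than their Wikipedia counterparts.
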
### Languages

The associated BCP-47 code is `de-DE`.

The text of the articles is in German. Klexikon articles additionally undergo a simple form of peer review before publication, and aim their language at children between 8 and 13 years old. This means that the expected text difficulty of Klexikon articles is generally lower than that of Wikipedia entries.

## Dataset Structure

### Data Instances

One data point contains the Wikipedia text (`wiki_text`) as well as the Klexikon text (`klexikon_text`).
Sentences are separated by newlines in both sources, and section headings are indicated by a leading `==` (`===` for subheadings, `====` for sub-subheadings, etc.).
Further, each instance includes `wiki_url` and `klexikon_url`, pointing to the respective source texts. Note that the original articles were extracted in April 2021, so re-crawling the texts yourself will likely yield different content.
Lastly, we include a unique identifier `u_id` as well as the page title `title` of the Klexikon page.

Sample (texts abridged for clarity):
```json
{
  "u_id": 0,
  "title": "ABBA",
  "wiki_url": "https://de.wikipedia.org/wiki/ABBA",
  "klexikon_url": "https://klexikon.zum.de/wiki/ABBA",
  "wiki_text": [
    "ABBA ist eine schwedische Popgruppe, die aus den damaligen Paaren Agnetha Fältskog und Björn Ulvaeus sowie Benny Andersson und Anni-Frid Lyngstad besteht und sich 1972 in Stockholm formierte.",
    "Sie gehört mit rund 400 Millionen verkauften Tonträgern zu den erfolgreichsten Bands der Musikgeschichte.",
    "Bis in die 1970er Jahre hatte es keine andere Band aus Schweden oder Skandinavien gegeben, der vergleichbare Erfolge gelungen waren.",
    "Trotz amerikanischer und britischer Dominanz im Musikgeschäft gelang der Band ein internationaler Durchbruch.",
    "Sie hat die Geschichte der Popmusik mitgeprägt.",
    "Zu ihren bekanntesten Songs zählen Mamma Mia, Dancing Queen und The Winner Takes It All.",
    "1982 beendeten die Gruppenmitglieder aufgrund privater Differenzen ihre musikalische Zusammenarbeit.",
    "Seit 2016 arbeiten die vier Musiker wieder zusammen an neuer Musik, die 2021 erscheinen soll."
  ],
  "klexikon_text": [
    "ABBA war eine Musikgruppe aus Schweden.",
    "Ihre Musikrichtung war die Popmusik.",
    "Der Name entstand aus den Anfangsbuchstaben der Vornamen der Mitglieder, Agnetha, Björn, Benny und Anni-Frid.",
    "Benny Andersson und Björn Ulvaeus, die beiden Männer, schrieben die Lieder und spielten Klavier und Gitarre.",
    "Anni-Frid Lyngstad und Agnetha Fältskog sangen."
  ]
}
```
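
Given the newline-separated format, separating section headings from body sentences takes only a small helper; a sketch (the example lines below are hypothetical, not taken from the dataset):

```python
def heading_level(line: str) -> int:
    """Return heading depth (2 for '==', 3 for '===', ...), or 0 for body text."""
    if not line.startswith("=="):
        return 0
    return len(line) - len(line.lstrip("="))

# Hypothetical lines following the dataset's convention:
lines = ["== Geschichte", "ABBA war eine Musikgruppe aus Schweden.", "=== Die Anfänge"]
body = [line for line in lines if heading_level(line) == 0]
```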

### Data Fields

* `u_id` (`int`): A unique identifier for each document pair in the dataset. IDs 0-2349 are reserved for training data, 2350-2623 for testing, and 2624-2897 for validation.
* `title` (`str`): Title of the Klexikon page for this sample.
* `wiki_url` (`str`): URL of the associated Wikipedia article. Notably, this mapping is non-trivial, since some pages had to be disambiguated, in which case the Wikipedia title is not exactly the same as the Klexikon one.
* `klexikon_url` (`str`): URL of the Klexikon article.
* `wiki_text` (`List[str]`): List of sentences of the Wikipedia article. We provide a pre-split document, using spaCy's sentence splitting (model: `de_core_news_md`). Additionally, please note that we do not include page contents outside of `<p>` tags, which excludes lists, captions, and images.
* `klexikon_text` (`List[str]`): List of sentences of the Klexikon article. We apply the same processing as for the Wikipedia texts.

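To illustrate the fields, here is a minimal sketch that reconstructs the newline-separated article text and derives split membership from `u_id`, assuming contiguous blocks of 2350/274/274 samples (the record values are abridged and partly hypothetical):

```python
# Minimal record following the documented field layout (texts abridged).
record = {
    "u_id": 0,
    "title": "ABBA",
    "wiki_url": "https://de.wikipedia.org/wiki/ABBA",
    "klexikon_url": "https://klexikon.zum.de/wiki/ABBA",
    "wiki_text": ["ABBA ist eine schwedische Popgruppe.", "== Geschichte"],
    "klexikon_text": ["ABBA war eine Musikgruppe aus Schweden."],
}

def full_text(sentences: list[str]) -> str:
    # Sentences (and '=='-prefixed headings) are newline-separated in each article.
    return "\n".join(sentences)

def split_for(u_id: int) -> str:
    # Contiguous u_id blocks: training first, then testing, then validation.
    if u_id <= 2349:
        return "train"
    if u_id <= 2623:
        return "test"
    return "validation"
```
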
### Data Splits

We provide a stratified split of the dataset, based on the lengths of the respective Wikipedia/Klexikon article pairs (measured in number of sentences).
The x-axis represents the length of the Wikipedia article, and the y-axis the length of the Klexikon article.
We segment the coordinate system into rectangles of shape `(100, 10)`, and randomly sample an 80/10/10 split for training/validation/test from each rectangle to ensure stratification. For rectangles with fewer than 10 entries, we put all samples into training.

The final splits have the following sizes:
* 2350 samples for training
* 274 samples for validation
* 274 samples for testing

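The stratification procedure above can be sketched as follows (an illustrative reimplementation under the stated `(100, 10)` bucketing, not the authors' exact code):

```python
import random
from collections import defaultdict

def stratified_split(pairs, seed=0):
    """pairs: iterable of (u_id, wiki_len, klexikon_len), lengths in sentences."""
    rng = random.Random(seed)
    buckets = defaultdict(list)
    for u_id, wiki_len, klex_len in pairs:
        # Rectangles of shape (100, 10) over the (wiki, klexikon) length plane.
        buckets[(wiki_len // 100, klex_len // 10)].append(u_id)
    train, validation, test = [], [], []
    for ids in buckets.values():
        rng.shuffle(ids)
        if len(ids) < 10:
            train.extend(ids)  # too small to stratify: everything goes to training
            continue
        n = len(ids) // 10     # roughly 10% each for validation and test
        validation.extend(ids[:n])
        test.extend(ids[n:2 * n])
        train.extend(ids[2 * n:])
    return train, validation, test
```
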
## Dataset Creation

### Curation Rationale

As previously described, the Klexikon resource was created as an attempt to bridge the two fields of text summarization and text simplification. Previous datasets suffer from one or more of the following shortcomings:

* They primarily focus on input/output pairs of similar lengths, which does not reflect longer-form texts.
* Data exists primarily for English, while other languages are notoriously understudied.
* Alignments exist at the sentence level, but not at the document level.

This dataset serves as a starting point to investigate the feasibility of end-to-end simplification systems for longer input documents.

### Source Data

#### Initial Data Collection and Normalization

Data was collected from [Klexikon](https://klexikon.zum.de), and afterwards aligned with corresponding texts from [German Wikipedia](https://de.wikipedia.org).
Specifically, the collection was performed in April 2021, at which point 3145 articles could be extracted from Klexikon. We then semi-automatically aligned the articles with Wikipedia by looking up articles with the same title.
For articles that did not match exactly, we manually reviewed their content and matched them to an appropriate substitute if at least 66% of the Klexikon paragraphs could be covered.
Similarly, we manually reviewed disambiguation pages on Wikipedia.

We extract only full-text content, excluding figures, captions, and list elements from the final text corpus, and only retain articles for which the respective Wikipedia document consists of at least 15 paragraphs after pre-processing.

#### Who are the source language producers?

The language producers are contributors to Klexikon and Wikipedia. No demographic information was available from the data sources.

### Annotations

#### Annotation process

Annotations were performed by manually reviewing the URLs of ambiguous article pairs. No annotation platforms or existing tools were used in the process.
Otherwise, articles were matched based on their exact title.

#### Who are the annotators?

The manually aligned articles were reviewed by the dataset author (Dennis Aumiller).

### Personal and Sensitive Information

Since Klexikon and Wikipedia are public encyclopedias, no further personal or sensitive information is included. We did not investigate to what extent information about public figures is included in the dataset.

## Considerations for Using the Data

### Social Impact of Dataset

Accessibility on the web is still a major issue, particularly for disadvantaged readers.
This dataset has the potential to strengthen text simplification systems, which can improve this situation.
In terms of language coverage, this dataset also has a beneficial impact on the availability of German data.

A potential negative aspect lies in the automatic alignment of articles: the alignments will never be 100% perfect, and can therefore cause mis-aligned articles (or associations), despite our best intentions.

### Discussion of Biases

We have not tested whether any particular bias towards a specific article *type* (i.e., "person", "city", etc.) exists.
Similarly, we attempted to provide an unbiased (stratified) split for the validation and test sets, but given that we only cover around 2900 articles, it is possible that these articles represent a particular focal lens on the overall distribution of lexical content.

### Other Known Limitations

Since the articles were written independently of each other, it is not guaranteed that every sentence of the simplified article has an exact counterpart in the source. This can also stem from the fact that Wikipedia sometimes maintains separate pages for particular aspects (e.g., the city of "Aarhus" has a separate page for its art museum, ARoS), whereas Klexikon lists the content and description of ARoS on the page of the city itself.

## Additional Information

### Dataset Curators

The dataset was curated solely by the author of this dataset, Dennis Aumiller.

### Licensing Information

Klexikon and Wikipedia make their textual contents available under the CC BY-SA license, which this dataset inherits.

### Citation Information

    @article{aumiller-gertz-2022-klexikon,
      title = {{Klexikon: A German Dataset for Joint Summarization and Simplification}},
      author = {Aumiller, Dennis and Gertz, Michael},
      year = {2022},
      journal = {arXiv preprint arXiv:2201.07198},
      url = {https://arxiv.org/abs/2201.07198},
    }
data/test.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:a9ca98a61b6552c36525bfca0ed397dd50559573bf001f38d764aa84e8375212
size 9952151
data/train.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:45ab17e6aed286ef5536667dd8a34acd09ff51fb942d72a775dd6284b2f2244e
size 99409427
data/validation.json ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7f704e740cd179a8db3cf741689c6b362b6560c01ee411e93532497c02cbcc35
size 9976273