rstodden committed
Commit 46d1d6c
1 Parent(s): 69df786

Update README.md

Files changed (1)
  1. README.md +88 -44
README.md CHANGED
@@ -6,11 +6,14 @@ language:
6
  pretty_name: DEplain-APA
7
  size_categories:
8
  - 10K<n<100K
9
  ---
10
 
11
- # Dataset Card for DEplain-APA
 
12
 
13
- ## Table of Contents
14
  - [Dataset Description](#dataset-description)
15
  - [Dataset Summary](#dataset-summary)
16
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
@@ -34,46 +37,71 @@ size_categories:
34
  - [Citation Information](#citation-information)
35
  - [Contributions](#contributions)
36
 
37
- ## Dataset Description
38
 
39
  - **Repository:** [DEplain-APA zenodo repository](https://zenodo.org/record/7674560)
40
  - **Paper:** Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939). In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
41
  - **Point of Contact:** [Regina Stodden](mailto:regina.stodden@hhu.de)
42
 
43
- ### Dataset Summary
44
 
45
- [DEplain-APA](https://zenodo.org/record/7674560) [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the training and evaluation of sentence and document simplification in German. All texts of this dataset are provided by the Austrian Press Agency. The simple-complex sentence pairs are manually aligned.
46
 
47
- ### Supported Tasks and Leaderboards
48
 
49
  The dataset supports the training and evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
50
 
51
- ### Languages
52
 
53
  The text in this dataset is in Austrian German (`de-at`).
54
 
55
- ### Domains
56
  All texts in this dataset are news data.
57
 
58
  ## Dataset Structure
59
 
60
- ### Data Access
61
 
62
  - The dataset is licensed with restricted access for academic purposes only. To download the dataset, please request access on [zenodo](https://zenodo.org/record/7674560).
63
 
64
- ### Data Instances
65
- - `document-simplification` configuration: an instance consists of an original document and one reference simplification.
66
- - `sentence-simplification` configuration: an instance consists of an original sentence and one manually aligned reference simplification.
67
-
68
-
69
- ### Data Fields
70
-
71
- - `original`: an original text from the source datasets written for people with German skills equal to CEFR level B1
72
- - `simplification`: a simplified text from the source datasets written for people with German skills equal to CEFR level A2
73
- - more metadata is added to the dataset
74
 
75
 
76
- ### Data Splits
77
 
78
  DEplain-APA is randomly split into a training, development and test set. The training set of the sentence-simplification configuration contains only sentences from documents that are part of the training set of the document-simplification configuration; the same holds for the dev and test sets.
79
  The statistics are given below.
@@ -81,72 +109,88 @@ The statistics are given below.
81
 
82
  | | Train | Dev | Test | Total |
83
  | ----- | ------ | ------ | ---- | ----- |
84
- | Document Pairs | 387 | 48 | 48 | 483
85
- | Sentence Pairs | 10660 | 1231 | 1231 | 13122
86
 
 
87
 
88
  More information on simplification operations will follow soon.
89
 
90
- ## Dataset Creation
91
 
92
- ### Curation Rationale
93
 
94
  DEplain-APA was created to improve the training and evaluation of German document and sentence simplification. The data comes from the same provider as the APA-LHA corpus. In contrast to APA-LHA (automatically aligned), all sentence pairs of DEplain-APA are manually aligned. Further, DEplain-APA aligns texts at language level B1 with texts at level A2, which results in mostly mild simplifications.
95
 
96
- Further DEplain-APA, contains parallel documents as well as parallel sentence pairs.
97
 
98
- ### Source Data
99
 
100
- #### Initial Data Collection and Normalization
101
 
102
- The original news texts (in CEFR level C2) were manually simplified by professional translators, i.e. capito – CFS GmbH, and provided to us by the Austrian Press Agency.
103
  All documents date back to 2019 to 2021.
104
  Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool TS-ANNO. The data was split into sentences using a German spaCy model.
105
 
106
- #### Who are the source language producers?
107
- The original news texts (in CEFR level C2) were manually simplified by professional translators, i.e. capito – CFS GmbH. No other demographic or compensation information is known.
108
 
109
- ### Annotations
110
 
111
- #### Annotation process
112
 
113
  The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
114
 
115
- #### Who are the annotators?
116
 
117
  The annotators are two German native speakers trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
118
  They are not part of any target group of text simplification.
119
 
120
- ### Personal and Sensitive Information
121
 
122
  No sensitive data.
123
 
124
- ## Considerations for Using the Data
125
 
126
- ### Social Impact of Dataset
127
 
128
  Many people cannot understand texts due to their complexity. Automatic text simplification methods can simplify such texts for them. Our new training data can benefit the training of TS models.
129
 
130
- ### Discussion of Biases
131
 
132
  No bias is known.
133
 
134
- ### Other Known Limitations
135
 
136
  The dataset is provided for research purposes only. Please check the dataset license for additional information.
137
 
138
- ## Additional Information
139
 
140
- ### Dataset Curators
141
 
142
  Researchers at the Heinrich-Heine-University Düsseldorf, Germany, developed DEplain-APA. This research is part of the PhD-program `Online Participation` supported by the North Rhine-Westphalian (German) funding scheme `Forschungskolleg`.
143
 
144
- ### Licensing Information
 
 
145
 
146
- [More Information Needed]
147
 
148
- ### Citation Information
149
 
150
- [More Information Needed]
151
 
152
  This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).
 
6
  pretty_name: DEplain-APA
7
  size_categories:
8
  - 10K<n<100K
9
+ task_ids:
10
+ - text-simplification
11
  ---
12
 
13
+ # Dataset Statement for DEplain-APA
14
+ In the following, we provide a dataset statement for DEplain-APA (following Hugging Face's dataset cards).
15
 
16
+ ### Table of Contents
17
  - [Dataset Description](#dataset-description)
18
  - [Dataset Summary](#dataset-summary)
19
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
 
37
  - [Citation Information](#citation-information)
38
  - [Contributions](#contributions)
39
 
40
+ ### Dataset Description
41
 
42
  - **Repository:** [DEplain-APA zenodo repository](https://zenodo.org/record/7674560)
43
  - **Paper:** Regina Stodden, Omar Momen, and Laura Kallmeyer. 2023. ["DEplain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification."](https://arxiv.org/abs/2305.18939). In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Toronto, Canada. Association for Computational Linguistics.
44
  - **Point of Contact:** [Regina Stodden](mailto:regina.stodden@hhu.de)
45
 
46
+ #### Dataset Summary
47
 
48
+ DEplain-APA [(Stodden et al., 2023)](https://arxiv.org/abs/2305.18939) is a dataset for the training and evaluation of sentence and document simplification in German. All texts of this dataset are provided by the Austrian Press Agency. The simple-complex sentence pairs are manually aligned.
49
 
50
+ #### Supported Tasks and Leaderboards
51
 
52
  The dataset supports the training and evaluation of `text-simplification` systems. Success in this task is typically measured using the [SARI](https://huggingface.co/metrics/sari) and [FKBLEU](https://huggingface.co/metrics/fkbleu) metrics described in the paper [Optimizing Statistical Machine Translation for Text Simplification](https://www.aclweb.org/anthology/Q16-1029.pdf).
53
 
54
+ #### Languages
55
 
56
  The text in this dataset is in Austrian German (`de-at`).
57
 
58
+ #### Domains
59
  All texts in this dataset are news data.
60
 
61
  ## Dataset Structure
62
 
63
+ #### Data Access
64
 
65
  - The dataset is licensed with restricted access for academic purposes only. To download the dataset, please request access on [zenodo](https://zenodo.org/record/7674560).
66
 
67
+ #### Data Instances
68
+ - `document-simplification` configuration: an instance consists of an original document and one reference simplification (in plain-text format).
69
+ - `sentence-simplification` configuration: an instance consists of original sentence(s) and one manually aligned reference simplification (including one or more sentences).
70
+
71
+
72
+ #### Data Fields
73
+
74
+ | data field | data field description |
75
+ |------------|------------------------|
76
+ | `original` | an original text from the source dataset |
77
+ | `simplification` | a simplified text from the source dataset |
78
+ | `pair_id` | document pair id |
79
+ | `complex_document_id` (on doc-level) | id of the complex document (-1) |
80
+ | `simple_document_id` (on doc-level) | id of the simple document (-0) |
81
+ | `original_id` (on sent-level) | id of the sentence(s) of the original text |
82
+ | `simplification_id` (on sent-level) | id of the sentence(s) of the simplified text |
83
+ | `domain` | text domain of the document pair |
84
+ | `corpus` | subcorpus name |
85
+ | `simple_url` | origin URL of the simplified document |
86
+ | `complex_url` | origin URL of the original document |
87
+ | `simple_level` or `language_level_simple` | CEFR language level required to understand the simplified document |
88
+ | `complex_level` or `language_level_original` | CEFR language level required to understand the original document |
89
+ | `simple_location_html` | location on hard disk where the HTML file of the simplified document is stored |
90
+ | `complex_location_html` | location on hard disk where the HTML file of the original document is stored |
91
+ | `simple_location_txt` | location on hard disk where the content extracted from the HTML file of the simplified document is stored |
92
+ | `complex_location_txt` | location on hard disk where the content extracted from the HTML file of the original document is stored |
93
+ | `alignment_location` | location on hard disk where the alignment is stored |
94
+ | `simple_author` | author (or copyright owner) of the simplified document |
95
+ | `complex_author` | author (or copyright owner) of the original document |
96
+ | `simple_title` | title of the simplified document |
97
+ | `complex_title` | title of the original document |
98
+ | `license` | license of the data |
99
+ | `last_access` or `access_date` | date of data origin, or date when the HTML files were downloaded |
100
+ | `rater` | id of the rater who annotated the sentence pair |
101
+ | `alignment` | type of alignment, e.g., 1:1, 1:n, n:1, or n:m |
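To illustrate the fields, here is a hypothetical sentence-level instance: the field names follow the table above, but all values are invented for illustration and do not come from the corpus.

```python
# Hypothetical instance of the sentence-simplification configuration.
# Field names follow the dataset card; the values are made up.
instance = {
    "original": "Die Verhandlungen wurden auf unbestimmte Zeit vertagt.",
    "simplification": "Die Gespräche wurden verschoben. Niemand weiß, wie lange.",
    "pair_id": "apa_0042",                 # hypothetical id
    "alignment": "1:2",                    # one original sentence, two simple ones
    "rater": "annotator_1",
}

# Example use: keep only 1:1 alignments, a common filter when training
# sentence-level simplification models.
def is_one_to_one(inst):
    return inst["alignment"] == "1:1"

one_to_one = [i for i in [instance] if is_one_to_one(i)]
```

Since the example instance is a 1:2 alignment, the filter above would exclude it.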
102
 
103
 
104
+ #### Data Splits
105
 
106
  DEplain-APA is randomly split into a training, development and test set. The training set of the sentence-simplification configuration contains only sentences from documents that are part of the training set of the document-simplification configuration; the same holds for the dev and test sets.
107
  The statistics are given below.
 
109
 
110
  | | Train | Dev | Test | Total |
111
  | ----- | ------ | ------ | ---- | ----- |
112
+ | Document Pairs | 387 | 48 | 48 | 483 |
113
+ | Sentence Pairs | 10660 | 1231 | 1231 | 13122 |
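The split sizes are internally consistent (train + dev + test equals the totals). A quick sketch verifying this and computing the training share:

```python
# Split statistics from the table above.
splits = {
    "document_pairs": {"train": 387, "dev": 48, "test": 48, "total": 483},
    "sentence_pairs": {"train": 10660, "dev": 1231, "test": 1231, "total": 13122},
}

# Each split's parts must sum to its total.
for name, s in splits.items():
    assert s["train"] + s["dev"] + s["test"] == s["total"], name

# Share of sentence pairs available for training (roughly 81%).
train_share = splits["sentence_pairs"]["train"] / splits["sentence_pairs"]["total"]
```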
114
 
115
+ Inter-Annotator-Agreement: 0.7497 (moderate).
116
 
117
  More information on simplification operations will follow soon.
118
 
119
+ ### Dataset Creation
120
 
121
+ #### Curation Rationale
122
 
123
  DEplain-APA was created to improve the training and evaluation of German document and sentence simplification. The data comes from the same provider as the APA-LHA corpus. In contrast to APA-LHA (automatically aligned), all sentence pairs of DEplain-APA are manually aligned. Further, DEplain-APA aligns texts at language level B1 with texts at level A2, which results in mostly mild simplifications.
124
 
125
+ Further, DEplain-APA contains parallel documents as well as parallel sentence pairs.
126
 
127
+ #### Source Data
128
 
129
+ ##### Initial Data Collection and Normalization
130
 
131
+ The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e. capito – CFS GmbH, and provided to us by the Austrian Press Agency.
132
  All documents date back to 2019 to 2021.
133
  Two German native speakers manually aligned the sentence pairs using the text simplification annotation tool TS-ANNO. The data was split into sentences using a German spaCy model.
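The card does not name the exact spaCy model used for sentence splitting; a full German pipeline such as `de_core_news_sm` is likely. As a minimal, dependency-light sketch, a blank German pipeline with spaCy's rule-based sentencizer achieves the same kind of segmentation:

```python
import spacy

# Minimal sketch: a blank German pipeline with the rule-based sentencizer.
# The authors likely used a full German model instead (an assumption).
nlp = spacy.blank("de")
nlp.add_pipe("sentencizer")

doc = nlp("Die APA liefert Nachrichten. Diese Texte wurden vereinfacht.")
sentences = [sent.text for sent in doc.sents]
```

A full model would additionally use syntactic cues for sentence boundaries, which matters for abbreviation-heavy news text.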
134
 
135
+ ##### Who are the source language producers?
136
+ The original news texts (in CEFR level B2) were manually simplified by professional translators, i.e. capito – CFS GmbH. No other demographic or compensation information is known.
137
 
138
+ #### Annotations
139
 
140
+ ##### Annotation process
141
 
142
  The instructions given to the annotators are available [here](https://github.com/rstodden/TS_annotation_tool/tree/master/annotation_schema).
143
 
144
+ ##### Who are the annotators?
145
 
146
  The annotators are two German native speakers trained in linguistics. Both were compensated with at least the minimum wage of their country of residence.
147
  They are not part of any target group of text simplification.
148
 
149
+ #### Personal and Sensitive Information
150
 
151
  No sensitive data.
152
 
153
+ ### Considerations for Using the Data
154
 
155
+ #### Social Impact of Dataset
156
 
157
  Many people cannot understand texts due to their complexity. Automatic text simplification methods can simplify such texts for them. Our new training data can benefit the training of TS models.
158
 
159
+ #### Discussion of Biases
160
 
161
  No bias is known.
162
 
163
+ #### Other Known Limitations
164
 
165
  The dataset is provided for research purposes only. Please check the dataset license for additional information.
166
 
167
+ ### Additional Information
168
 
169
+ #### Dataset Curators
170
 
171
  Researchers at the Heinrich-Heine-University Düsseldorf, Germany, developed DEplain-APA. This research is part of the PhD-program `Online Participation` supported by the North Rhine-Westphalian (German) funding scheme `Forschungskolleg`.
172
 
173
+ #### Licensing Information
174
+
175
+ The dataset (DEplain-APA) is provided for research purposes only. Please request access using the following form: [https://zenodo.org/record/7674560](https://zenodo.org/record/7674560).
176
 
177
+ #### Citation Information
178
 
179
+ If you use part of this work, please cite our paper:
180
 
 
181
 
182
+ ```
183
+ @inproceedings{stodden-etal-2023-deplain,
184
+ title = "{DE}plain: A German Parallel Corpus with Intralingual Translations into Plain Language for Sentence and Document Simplification",
185
+ author = "Stodden, Regina and
186
+ Momen, Omar and
187
+ Kallmeyer, Laura",
188
+ booktitle = "Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
189
+ month = jul,
190
+ year = "2023",
191
+ address = "Toronto, Canada",
192
+ publisher = "Association for Computational Linguistics",
193
+ note = "Preprint: https://arxiv.org/abs/2305.18939",
194
+ }
195
+ ```
196
  This dataset card uses material written by [Juan Diego Rodriguez](https://github.com/juand-r) and [Yacine Jernite](https://github.com/yjernite).