---
annotations_creators:
- crowdsourced
language_creators:
- found
language:
- de
license:
- cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- 10K<n<100K
source_datasets:
- extended|other-from-One-Million-Posts-Corpus
task_categories:
- text-classification
task_ids:
- topic-classification
pretty_name: 10k German News Articles Datasets
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': Web
'1': Panorama
'2': International
'3': Wirtschaft
'4': Sport
'5': Inland
'6': Etat
'7': Wissenschaft
'8': Kultur
splits:
- name: train
num_bytes: 24418220
num_examples: 9245
- name: test
num_bytes: 2756401
num_examples: 1028
download_size: 17244356
dataset_size: 27174621
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for 10k German News Articles Datasets
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)
## Dataset Description
- **Homepage:** [10k German News Article Dataset](https://tblock.github.io/10kGNAD/)
- **Repository:** [10k German News Article Dataset](https://github.com/tblock/10kGNAD)
- **Point of Contact:** [Steven Liu](mailto:stevhliu@gmail.com)
### Dataset Summary
The 10k German News Article Dataset consists of 10,273 German-language news articles from the website of the Austrian
newspaper DER STANDARD. Each article has been classified into one of 9 categories by professional
forum moderators employed by the newspaper. The dataset is extended from the original
[One Million Posts Corpus](https://ofai.github.io/million-post-corpus/). It was created to support
topic classification in German, because a classifier that is effective on an English dataset may not be as effective on
a German one due to the language's richer inflection and longer compound words. The dataset can also be used
as a benchmark for German topic classification.
### Supported Tasks and Leaderboards
This dataset can be used to train a model, such as [BERT](https://huggingface.co/bert-base-uncased), for `topic classification` on German news articles. There are 9 possible categories.
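A minimal sketch of loading the data and preparing it for fine-tuning is shown below. It assumes the dataset is available on the Hugging Face Hub under the ID `gnad10` and uses `bert-base-german-cased` as an example checkpoint (a German model is a more natural fit than the English `bert-base-uncased`); adjust both identifiers to your setup.
```
# A minimal sketch, assuming the Hub dataset ID "gnad10" and the checkpoint
# "bert-base-german-cased"; swap these for your own copies if needed.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

dataset = load_dataset("gnad10")  # provides "train" and "test" splits

model_name = "bert-base-german-cased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=9)

def tokenize(batch):
    # Articles can be long; truncate to the model's 512-token limit.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True)
```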
### Languages
The text is in German and comes from the website of the Austrian newspaper DER STANDARD. The BCP-47 code for German is
`de-DE`.
## Dataset Structure
### Data Instances
An example data instance contains a German news article (the title and article body are concatenated) and its corresponding topic category.
```
{'text': 'Die Gewerkschaft GPA-djp lanciert den "All-in-Rechner" und findet, dass die Vertragsform auf die Führungsebene beschränkt gehört. Wien – Die Gewerkschaft GPA-djp sieht Handlungsbedarf bei sogenannten All-in-Verträgen.',
 'label': 'Wirtschaft'}
```
### Data Fields
* `text`: contains the title and content of the article
* `label`: can be one of 9 possible topic categories (`Web`, `Panorama`, `International`, `Wirtschaft`, `Sport`, `Inland`, `Etat`, `Wissenschaft`, `Kultur`)
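The `label` field is stored as a class-label integer; the string names above can be recovered from the dataset's features. A minimal sketch, again assuming the Hub ID `gnad10`:
```
from datasets import load_dataset

train = load_dataset("gnad10", split="train")
label_feature = train.features["label"]  # ClassLabel holding the 9 names above

example = train[0]
print(example["text"][:80])
print(label_feature.int2str(example["label"]))  # integer id -> e.g. "Wirtschaft"
```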
### Data Splits
The data is split into a training set consisting of 9,245 articles and a test set consisting of 1,028 articles.
## Dataset Creation
### Curation Rationale
The dataset was created to support topic classification in the German language. English text classification datasets are common ([AG News](https://huggingface.co/datasets/ag_news) and [20 Newsgroups](https://huggingface.co/datasets/newsgroup)), but German datasets are less so. A classifier trained on an English dataset may not work as well on German text due to grammatical differences, so a German dataset is needed to effectively assess model performance.
### Source Data
#### Initial Data Collection and Normalization
The 10k German News Article Dataset is extended from the One Million Posts Corpus: 10,273 German news articles were collected from this larger corpus. In the One Million Posts Corpus, each article has a topic path such as
`Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise`. 10kGNAD uses the second part of the topic path as the topic label, as sketched below. The article title and text are concatenated into a single text field, and author names are removed to prevent keyword classification on authors who write frequently about a particular topic.
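As an illustration of the steps just described, the sketch below derives the label from the second segment of a topic path and joins the title with the article body. The function and field names are hypothetical and do not come from the original extraction script.
```
def to_example(topic_path: str, title: str, body: str) -> dict:
    # Hypothetical helper mirroring the described preprocessing:
    # "Newsroom/Wirtschaft/Wirtschaftpolitik/..." -> label "Wirtschaft",
    # with title and body concatenated into a single text field.
    label = topic_path.split("/")[1]
    return {"text": f"{title} {body}", "label": label}

print(to_example(
    "Newsroom/Wirtschaft/Wirtschaftpolitik/Finanzmaerkte/Griechenlandkrise",
    "Beispieltitel",
    "Beispieltext ohne Autorennamen.",
))
# {'text': 'Beispieltitel Beispieltext ohne Autorennamen.', 'label': 'Wirtschaft'}
```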
#### Who are the source language producers?
The language producers are the authors writing for the website of the Austrian newspaper DER STANDARD.
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
This dataset was curated by Timo Block.
### Licensing Information
This dataset is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 license.
### Citation Information
Please consider citing the authors of the One Million Posts Corpus if you use the dataset:
```
@InProceedings{Schabus2017,
Author = {Dietmar Schabus and Marcin Skowron and Martin Trapp},
Title = {One Million Posts: A Data Set of German Online Discussions},
Booktitle = {Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR)},
Pages = {1241--1244},
Year = {2017},
Address = {Tokyo, Japan},
Doi = {10.1145/3077136.3080711},
Month = aug
}
```
### Contributions
Thanks to [@stevhliu](https://github.com/stevhliu) for adding this dataset. |