---
language:
- de
multilinguality:
- monolingual
license: cc-by-sa-4.0
size_categories:
- 100K<n<1M
task_categories:
- text-classification
pretty_name: Leipzig Corpora Wikipedia 2021 German
configs:
- config_name: default
  data_files:
  - split: 10k
    path: "10k.parquet"
  - split: 30k
    path: "30k.parquet"
  - split: 100k
    path: "100k.parquet"
  - split: 1mio
    path: "1mio.parquet"
---
## Leipzig Corpora Wikipedia 2021 German

This dataset contains several splits, ranging in size from 10k to 1mio, drawn from the German Wikipedia; the data were collected in 2021.
Every element in the dataset is labeled as "neutral".

The source can be found [here](https://wortschatz.uni-leipzig.de/de/download/German).
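
As a minimal usage sketch (assuming the dataset is published on the Hugging Face Hub; the repo ID below is a placeholder, not the actual path), a split can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# "user/leipzig-corpora-wikipedia-2021-german" is a placeholder repo ID;
# substitute the actual Hub path of this dataset.
ds = load_dataset("user/leipzig-corpora-wikipedia-2021-german", split="10k")

# Every element carries the label "neutral".
print(ds[0])
```

The split names (`10k`, `30k`, `100k`, `1mio`) follow the `configs` section of the YAML header above, each backed by its own parquet file.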

## Citation

```
@inproceedings{goldhahn-etal-2012-building,
    title = "Building Large Monolingual Dictionaries at the {L}eipzig Corpora Collection: From 100 to 200 Languages",
    author = "Goldhahn, Dirk  and
      Eckart, Thomas  and
      Quasthoff, Uwe",
    editor = "Calzolari, Nicoletta  and
      Choukri, Khalid  and
      Declerck, Thierry  and
      Do{\u{g}}an, Mehmet U{\u{g}}ur  and
      Maegaard, Bente  and
      Mariani, Joseph  and
      Moreno, Asuncion  and
      Odijk, Jan  and
      Piperidis, Stelios",
    booktitle = "Proceedings of the Eighth International Conference on Language Resources and Evaluation ({LREC}'12)",
    month = may,
    year = "2012",
    address = "Istanbul, Turkey",
    publisher = "European Language Resources Association (ELRA)",
    url = "http://www.lrec-conf.org/proceedings/lrec2012/pdf/327_Paper.pdf",
    pages = "759--765",
    abstract = "The Leipzig Corpora Collection offers free online access to 136 monolingual dictionaries enriched with statistical information. In this paper we describe current advances of the project in collecting and processing text data automatically for a large number of languages. Our main interest lies in languages of “low density”, where only few text data exists online. The aim of this approach is to create monolingual dictionaries and statistical information for a high number of new languages and to expand the existing dictionaries, opening up new possibilities for linguistic typology and other research. Focus of this paper will be set on the infrastructure for the automatic acquisition of large amounts of monolingual text in many languages from various sources. Preliminary results of the collection of text data will be presented. The mainly language-independent framework for preprocessing, cleaning and creating the corpora and computing the necessary statistics will also be depicted.",
}
```