---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
  - split: test
    path: data/test-*
dataset_info:
  features:
  - name: wikidata_id
    dtype: string
  - name: text
    dtype: string
  - name: version_id
    dtype: string
  splits:
  - name: train
    num_bytes: 220855898
    num_examples: 109486
  - name: validation
    num_bytes: 12416304
    num_examples: 6173
  - name: test
    num_bytes: 12818380
    num_examples: 6219
  download_size: 150569852
  dataset_size: 246090582
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- da
pretty_name: Wiki40b-da
size_categories:
- 100K<n<1M
---
# Dataset Card for "wiki40b-da"

## Dataset Description

- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

### Dataset Summary

This dataset is an upload of the Danish part of the [Wiki40b dataset](https://aclanthology.org/2020.lrec-1.297), a cleaned version of a Wikipedia dump.

The dataset is identical in content to [this dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b), but that one requires `apache_beam`, `tensorflow` and `mwparserfromhell`, which can lead to dependency issues since these packages are not compatible with several newer ones.

The training, validation and test splits are the original ones.
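
A minimal loading sketch using the `datasets` library is shown below. The repository ID `alexandrainst/wiki40b-da` is an assumption here and should be replaced with the dataset's actual Hugging Face Hub ID if it differs.

```python
from datasets import load_dataset

# The repository ID below is an assumption; replace it with the dataset's
# actual Hub ID if it differs.
dataset = load_dataset("alexandrainst/wiki40b-da")

train = dataset["train"]            # 109,486 examples
validation = dataset["validation"]  # 6,173 examples
test = dataset["test"]              # 6,219 examples

print(train[0]["wikidata_id"])
```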


### Languages

The dataset is available in Danish (`da`).


## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

An example from the dataset looks as follows.
```
{
 'wikidata_id': 'Q17341862',
 'text': "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)",
 'version_id': '9018011197452276273'
}
```

### Data Fields

The data fields are the same among all splits.

- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.
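
The `text` field keeps the Wiki40b structure markers shown in the example above (such as `_START_ARTICLE_` and `_START_PARAGRAPH_`). Below is a rough sketch of how a document could be split into a title and paragraphs; the full marker set assumed here (including `_START_SECTION_` and `_NEWLINE_`) follows the original Wiki40b format and should be verified against the data.

```python
import re

def parse_wiki40b_text(text: str) -> dict:
    """Split a Wiki40b-formatted document into a title and paragraphs.

    Assumes the standard Wiki40b markers (_START_ARTICLE_, _START_SECTION_,
    _START_PARAGRAPH_ and _NEWLINE_); verify against the actual data.
    """
    # re.split with a capturing group keeps the marker names in the result,
    # so parts alternate between marker name and the content that follows it.
    parts = re.split(r"_START_(ARTICLE|SECTION|PARAGRAPH)_\n?", text)
    title = None
    paragraphs = []
    for marker, content in zip(parts[1::2], parts[2::2]):
        content = content.strip()
        if marker == "ARTICLE":
            title = content
        elif marker == "PARAGRAPH":
            # _NEWLINE_ marks in-paragraph line breaks in the Wiki40b format
            paragraphs.append(content.replace("_NEWLINE_", "\n"))
    return {"title": title, "paragraphs": paragraphs}

doc = parse_wiki40b_text(
    "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)"
)
print(doc["title"])       # Ægyptiske tekstiler
print(doc["paragraphs"])  # ['Tekstiler havde mange (...)']
```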


### Dataset Statistics

There are 109,486 samples in the training split, 6,173 samples in the validation split and 6,219 in the test split.

#### Document Length Distribution

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/dn-7_ugJObyF-CkD6XoO-.png)
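
The distribution above can be roughly reproduced locally; the sketch below uses character counts as the length measure, which may differ from the metric used in the plot.

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# The repository ID is an assumption; replace it with the dataset's actual Hub ID.
train = load_dataset("alexandrainst/wiki40b-da", split="train")

# Character count per document; the original plot may use a different measure
# (e.g. tokens), so treat this as an approximation.
lengths = [len(example["text"]) for example in train]

plt.hist(lengths, bins=100)
plt.xlabel("Document length (characters)")
plt.ylabel("Number of documents")
plt.title("Wiki40b-da training document lengths")
plt.savefig("document_length_distribution.png")
```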


## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [The Alexandra
Institute](https://alexandra.dk/) uploaded it to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0
license](https://creativecommons.org/licenses/by-sa/4.0/).