saattrupdan committed
Commit 6006547
1 Parent(s): 7d10079

Update README.md

Files changed (1)
  1. README.md +79 -1
README.md CHANGED
@@ -28,7 +28,85 @@ dataset_info:
  num_examples: 6219
  download_size: 150569852
  dataset_size: 246090582
license: cc-by-sa-4.0
task_categories:
- text-generation
language:
- da
pretty_name: Wiki40b-da
size_categories:
- 100K<n<1M
---
# Dataset Card for "wiki40b-da"

## Dataset Description

- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

### Dataset Summary

This dataset is an upload of the Danish part of the [Wiki40b dataset](https://aclanthology.org/2020.lrec-1.297), a cleaned version of a Wikipedia dump.

The dataset is identical in content to [this dataset on the Hugging Face Hub](https://huggingface.co/datasets/wiki40b), but that one requires `apache_beam`, `tensorflow` and `mwparserfromhell`, which can lead to dependency issues, since these packages are not compatible with several newer ones.

The training, validation and test splits are the original ones.

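Since no extra dependencies are needed, the dataset can be loaded with the `datasets` library alone. A minimal sketch, assuming the repository ID on the Hugging Face Hub is `alexandrainst/wiki40b-da` (adjust if the dataset lives under a different namespace):

```python
from datasets import load_dataset

# Load all three splits; the repository ID below is an assumption.
dataset = load_dataset("alexandrainst/wiki40b-da")

print(dataset)              # DatasetDict with train, validation and test splits
print(dataset["train"][0])  # {'wikidata_id': ..., 'text': ..., 'version_id': ...}
```
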

### Languages

The dataset is available in Danish (`da`).


## Dataset Structure

### Data Instances

- **Size of downloaded dataset files:** 150.57 MB
- **Size of the generated dataset:** 246.09 MB
- **Total amount of disk used:** 396.66 MB

An example from the dataset looks as follows.
```
{
    'wikidata_id': 'Q17341862',
    'text': "\n_START_ARTICLE_\nÆgyptiske tekstiler\n_START_PARAGRAPH_\nTekstiler havde mange (...)",
    'version_id': '9018011197452276273'
}
```
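
The `text` field keeps the structure markers used by Wiki40b. A minimal sketch of converting an article back to plain text, assuming the markers follow the original Wiki40b convention (`_START_ARTICLE_`, `_START_SECTION_` and `_START_PARAGRAPH_` delimit the structure, and `_NEWLINE_` marks line breaks inside a paragraph):

```python
import re

# Assumed marker set, following the original Wiki40b convention.
MARKERS = ["_START_ARTICLE_", "_START_SECTION_", "_START_PARAGRAPH_"]

def wiki40b_to_plain_text(text: str) -> str:
    """Strip Wiki40b structure markers and return plain text, one segment per line."""
    # _NEWLINE_ marks line breaks inside a paragraph.
    text = text.replace("_NEWLINE_", "\n")
    # Remove the structural markers; the article and section titles that
    # follow them are kept as separate lines.
    text = re.sub("|".join(re.escape(marker) for marker in MARKERS), "", text)
    # Drop the empty lines left behind by the removed markers.
    return "\n".join(line for line in text.split("\n") if line.strip())
```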

### Data Fields

The data fields are the same among all splits.

- `wikidata_id`: a `string` feature.
- `text`: a `string` feature.
- `version_id`: a `string` feature.


### Dataset Statistics

There are 109,486 samples in the training split, 6,173 samples in the validation split and 6,219 samples in the test split.

#### Document Length Distribution

![image/png](https://cdn-uploads.huggingface.co/production/uploads/60d368a613f774189902f555/dn-7_ugJObyF-CkD6XoO-.png)
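
The plot above can be reproduced with a short script along the following lines (a sketch; it assumes the length shown is the character length of the raw `text` field and that the repository ID is `alexandrainst/wiki40b-da`):

```python
import matplotlib.pyplot as plt
from datasets import load_dataset

# Repository ID and length unit (characters) are assumptions.
train = load_dataset("alexandrainst/wiki40b-da", split="train")
lengths = [len(text) for text in train["text"]]

plt.hist(lengths, bins=100)
plt.xlabel("Document length (characters)")
plt.ylabel("Number of documents")
plt.title("Document length distribution in the training split")
plt.show()
```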

## Additional Information

### Dataset Curators

[Dan Saattrup Nielsen](https://saattrupdan.github.io/) from [the Alexandra Institute](https://alexandra.dk/) uploaded it to the Hugging Face Hub.

### Licensing Information

The dataset is licensed under the [CC BY-SA 4.0 license](https://creativecommons.org/licenses/by-sa/4.0/).