---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: timestamp
    dtype: string
  - name: url
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: swe
    num_bytes: 165856225313
    num_examples: 49709189
  - name: nor
    num_bytes: 77788663940
    num_examples: 18907310
  - name: dan
    num_bytes: 96599020220
    num_examples: 25429808
  - name: isl
    num_bytes: 9224688518
    num_examples: 2373560
  - name: nld
    num_bytes: 342228993872
    num_examples: 117392666
  - name: deu
    num_bytes: 1563101303688
    num_examples: 420017484
  - name: fin
    num_bytes: 121611691135
    num_examples: 30467667
  - name: est
    num_bytes: 34500545108
    num_examples: 8004753
  download_size: 1496468851078
  dataset_size: 2410911131794
configs:
- config_name: default
  data_files:
  - split: swe
    path: data/swe-*
  - split: nor
    path: data/nor-*
  - split: dan
    path: data/dan-*
  - split: isl
    path: data/isl-*
  - split: nld
    path: data/nld-*
  - split: deu
    path: data/deu-*
  - split: fin
    path: data/fin-*
  - split: est
    path: data/est-*
language:
- sv
- 'no'
- da
- is
- nl
- de
- fi
- et
size_categories:
- 1M<n<10M
- 10M<n<100M
- 100M<n<1B
source_datasets:
- uonlp/CulturaX
---




<div align="center">
    <h1> CulturaX </h1>
    <h3> Cleaned, Enormous, and Public: The Multilingual Fuel to Democratize Large Language Models for 167 Languages </h3>
</div>




## Dataset Description

This is a subset of the CulturaX dataset, retaining only the Germanic languages (excluding English) together with Finnish and Estonian.

 
- **Repository:** [https://github.com/nlp-uoregon/CulturaX](https://github.com/nlp-uoregon/CulturaX)
- **Papers:** [CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages](https://arxiv.org/abs/2309.09400)


## Dataset Summary

We present CulturaX, a substantial multilingual dataset with 6.3 trillion tokens in 167 languages, tailored for large language model (LLM) development. Our dataset undergoes meticulous cleaning and deduplication through a rigorous multi-stage pipeline to achieve the best quality for model training, including language identification, URL-based filtering, metric-based cleaning, document refinement, and data deduplication. We employ MinHash at the document level to achieve fuzzy deduplication for the datasets in different languages. Our data cleaning framework includes diverse criteria and threshold selections, guided by extensive data samples, ensuring comprehensive noise filtering in various aspects. CulturaX is fully released to the public on HuggingFace to facilitate research and advancements in multilingual LLMs.
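
To make the deduplication step concrete, here is a minimal sketch of document-level fuzzy deduplication with MinHash using the `datasketch` library. The similarity threshold, number of permutations, and whitespace tokenization are illustrative assumptions; this is not the actual CulturaX pipeline.

```python
# Illustrative sketch of document-level fuzzy deduplication with MinHash.
# NOTE: threshold, num_perm and tokenization are assumptions, not the CulturaX settings.
from datasketch import MinHash, MinHashLSH

def minhash_of(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():   # simple whitespace tokens for the sketch
        m.update(token.encode("utf-8"))
    return m

docs = {
    "doc1": "the quick brown fox jumps over the lazy dog",
    "doc2": "the quick brown fox jumped over the lazy dog",  # near-duplicate of doc1
    "doc3": "an entirely different document about language models",
}

lsh = MinHashLSH(threshold=0.7, num_perm=128)  # assumed Jaccard threshold
kept = []
for doc_id, text in docs.items():
    m = minhash_of(text)
    if lsh.query(m):       # a similar document was already kept
        continue           # treat this one as a fuzzy duplicate and drop it
    lsh.insert(doc_id, m)
    kept.append(doc_id)

print(kept)  # doc2 is typically dropped as a near-duplicate of doc1
```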

Our dataset combines the most recent iteration of mC4 (version 3.1.0) [1] with all accessible OSCAR corpora up to the present year, including 20.19, 21.09, 22.01, and 23.01 [2]. After deep cleaning and deduplication, CulturaX comprises 16TB of data in Parquet format (expanding to 27TB when unpacked). More than half of our dataset is dedicated to non-English languages to significantly boost the data size and enhance the feasibility of training models in multilingual scenarios.

To obtain perplexity scores for data cleaning, we train a SentencePiece tokenizer and 5-gram Kneser-Ney language models as provided in the KenLM library [3] using the 20230501 dumps of Wikipedia. Our KenLM models are also released on HuggingFace: [https://huggingface.co/uonlp/kenlm](https://huggingface.co/uonlp/kenlm).
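
As a rough illustration of how such models can be used for perplexity-based filtering, the sketch below scores text with the `kenlm` Python bindings. The model path and the idea of a per-language perplexity threshold are assumptions for illustration, not the exact CulturaX cleaning code.

```python
# Sketch of perplexity scoring with a KenLM model (not the exact CulturaX cleaning code).
# The model path is a placeholder; pretrained models are at https://huggingface.co/uonlp/kenlm
import kenlm

model = kenlm.Model("de.arpa.bin")  # placeholder path to a 5-gram Kneser-Ney model

text = "Dies ist ein Beispielsatz ."          # in practice, tokenize first (e.g. SentencePiece)
log10_prob = model.score(text, bos=True, eos=True)  # total log10 probability of the sentence
perplexity = model.perplexity(text)                 # convenience wrapper around score()

print(f"log10 p = {log10_prob:.2f}, perplexity = {perplexity:.1f}")
# A perplexity far above a language-specific threshold can then be used
# to flag a document as noisy and filter it out.
```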

Details for the dataset can be found in our technical paper: [https://arxiv.org/abs/2309.09400](https://arxiv.org/abs/2309.09400)


You can download the dataset using the Hugging Face `datasets` library:

*You may need to follow these instructions to set up authentication before downloading the dataset: [https://huggingface.co/docs/huggingface_hub/quick-start#login](https://huggingface.co/docs/huggingface_hub/quick-start#login)*

```python
from datasets import load_dataset

# Load one language configuration from the parent CulturaX dataset
# ("de" is German; this subset exposes the same languages as the splits
#  swe, nor, dan, isl, nld, deu, fin and est).
ds = load_dataset("uonlp/CulturaX",
                  "de",
                  token=True)
```
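
Since this subset exposes its languages as splits (`swe`, `nor`, `dan`, `isl`, `nld`, `deu`, `fin`, `est`), a split can also be loaded directly, optionally in streaming mode to avoid a multi-terabyte download. The repository ID below is a placeholder for this dataset's Hub path, not a value taken from this card.

```python
from datasets import load_dataset

# Placeholder repo ID -- replace with the actual Hub path of this dataset.
REPO_ID = "<namespace>/<this-culturax-subset>"

# Stream the German split ("deu") so the full download is not required.
ds = load_dataset(REPO_ID, split="deu", streaming=True, token=True)

for example in ds.take(3):   # IterableDataset.take keeps only the first few records
    print(example["url"], example["source"])
    print(example["text"][:200])
```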


### Languages

The supported languages and statistics for our dataset can be found below:




|     | Code   | Language                 | # Documents     | # Tokens            | # Tokens (%) |
|----:|:-------|:-------------------------|:----------------|:--------------------|:------|
|   3 | deu    | German                   | 420,017,484     | 357,030,348,021     | 64.10 |
|  10 | nld    | Dutch                    | 117,392,666     | 80,032,209,900      | 14.37 |
|  19 | swe    | Swedish                  | 49,709,189      | 38,486,181,494      | 6.91  |
|  21 | fin    | Finnish                  | 30,467,667      | 28,925,009,180      | 5.19  |
|  23 | dan    | Danish                   | 25,429,808      | 22,921,651,314      | 4.12  |
|  25 | nor    | Norwegian                | 18,907,310      | 18,426,628,868      | 3.31  |
|  33 | est    | Estonian                 | 8,004,753       | 8,805,656,165       | 1.58  |
|  45 | isl    | Icelandic                | 2,373,560       | 2,350,592,857       | 0.42  |



### Dataset Structure

```json
{
    "text": ...,
    "timestamp": ...,
    "url": ...,
    "source": "mc4" | "OSCAR-xxxx",
}
```
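
As a minimal illustration of the record layout, the sketch below uses a made-up example and a simple check on the `source` field; the field values are invented for demonstration.

```python
# Made-up record matching the schema above (values are illustrative only).
record = {
    "text": "Ein Beispieldokument ...",
    "timestamp": "2019-04-23T09:15:00Z",
    "url": "https://example.org/artikel",
    "source": "OSCAR-2301",
}

def is_oscar(example):
    # "source" is either "mc4" or an "OSCAR-xxxx" snapshot name
    return example["source"].startswith("OSCAR")

print(is_oscar(record))  # True
# With a loaded dataset, the same predicate can be applied via ds.filter(is_oscar).
```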



## Considerations for Using the Data

As CulturaX is the cleaned version of the mC4 and OSCAR datasets, both of which were extracted from CommonCrawl, the data might still contain personal and sensitive information.
This must be considered before using this dataset for any purpose, such as training deep learning models.
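
As a minimal, illustrative sketch (not part of the CulturaX pipeline), one might mask obvious patterns such as e-mail addresses before training; a complete treatment of personal data requires far more than this.

```python
import re

# Very rough example of masking one obvious kind of personal data (e-mail addresses).
# Illustrative only; not part of the CulturaX pipeline and not a complete PII solution.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")

def mask_emails(example):
    example["text"] = EMAIL_RE.sub("<EMAIL>", example["text"])
    return example

sample = {"text": "Kontakt: jane.doe@example.com für weitere Informationen."}
print(mask_emails(sample)["text"])
# Kontakt: <EMAIL> für weitere Informationen.
```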


## License Information

The license terms for CulturaX strictly follow those of `mC4` and `OSCAR`. Please refer to both licenses below when using this dataset.

- [mC4 license](https://huggingface.co/datasets/allenai/c4#license)
- [OSCAR license](https://huggingface.co/datasets/oscar-corpus/OSCAR-2301#licensing-information)


## Citation 

To cite CulturaX, please use:

```
@misc{nguyen2023culturax,
      title={CulturaX: A Cleaned, Enormous, and Multilingual Dataset for Large Language Models in 167 Languages}, 
      author={Thuat Nguyen and Chien Van Nguyen and Viet Dac Lai and Hieu Man and Nghia Trung Ngo and Franck Dernoncourt and Ryan A. Rossi and Thien Huu Nguyen},
      year={2023},
      eprint={2309.09400},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```


## Reference

[1] Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, and Colin Raffel. 2021. mT5: A massively multilingual pre-trained text-to-text transformer. In NAACL 2021. https://huggingface.co/datasets/mc4

[2] Pedro Javier Ortiz Suárez, Benoît Sagot, and Laurent Romary. 2019. Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures. In Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. https://oscar-project.org/

[3] Kenneth Heafield. 2011. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation.