---
license: other
license_name: other
license_link: LICENSE
language:
- zh
- vi
- id
- ms
- tl
- my
- th
- lo
- km
- ta
---

# SEA-LION-Pile

SEA-LION-Pile is the pretraining dataset for SEA-LION, a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
This repository contains the cleaned mC4 portion of the SEA-LION-Pile.

The remainder of the SEA-LION-Pile dataset may be downloaded from the links provided below.
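
The cleaned mC4 portion hosted here can be streamed with the Hugging Face `datasets` library. The snippet below is a minimal sketch only: the repository ID (`aisingapore/sea-lion-pile`) and the single `train` split are assumptions and should be adjusted to this repository's actual path and configuration.

```python
# Minimal sketch: stream the cleaned mC4 portion of SEA-LION-Pile.
# The repository ID and split name below are assumptions.
from datasets import load_dataset

dataset = load_dataset(
    "aisingapore/sea-lion-pile",  # assumed repository ID; adjust as needed
    split="train",
    streaming=True,  # iterate without downloading the full corpus first
)

# Peek at the first few documents.
for i, example in enumerate(dataset):
    print(example)
    if i >= 2:
        break
```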


## Dataset Details

SEA-LION was trained on 980B tokens of the following data:

| Data Source               | Unique Tokens | Multiplier | Total Tokens | Percentage |
|---------------------------|:-------------:|:----------:|:------------:|:----------:|
| RefinedWeb - English      |        571.3B |          1 |       571.3B |     58.20% |
| mC4 - Chinese             |         91.2B |          1 |        91.2B |      9.29% |
| mC4 - Indonesian          |         3.68B |          4 |        14.7B |      1.50% |
| mC4 - Malay               |         0.72B |          4 |         2.9B |      0.29% |
| mC4 - Filipino            |         1.32B |          4 |         5.3B |      0.54% |
| mC4 - Burmese             |          1.2B |          4 |         4.9B |      0.49% |
| mC4 - Vietnamese          |         63.4B |          1 |        63.4B |      6.46% |
| mC4 - Thai                |          5.8B |          2 |        11.6B |      1.18% |
| WangChanBERTa - Thai      |            5B |          2 |          10B |      1.02% |
| mC4 - Lao                 |         0.27B |          4 |         1.1B |      0.12% |
| mC4 - Khmer               |         0.97B |          4 |         3.9B |      0.40% |
| mC4 - Tamil               |         2.55B |          4 |        10.2B |      1.04% |
| the Stack - Python        |         20.9B |          2 |        41.8B |      4.26% |
| the Stack - Javascript    |         55.6B |          1 |        55.6B |      5.66% |
| the Stack - Shell         |         1.25B |          2 |         2.5B |      0.26% |
| the Stack - SQL           |          6.4B |          2 |        12.8B |      1.31% |
| the Stack - Markdown      |         26.6B |          1 |        26.6B |      2.71% |
| RedPajama - StackExchange |         21.2B |          1 |        21.2B |      2.16% |
| RedPajama - ArXiv         |         30.6B |          1 |        30.6B |      3.12% |
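
In the table above, each source's Total Tokens is its Unique Tokens multiplied by its Multiplier (i.e. how many times the source is effectively repeated in the mix), and Percentage is that total as a share of the roughly 980B-token budget. The sketch below simply reproduces this arithmetic for a few rows; figures are copied from the table, and small discrepancies are due to rounding.

```python
# Reproduce the table arithmetic: total = unique tokens x multiplier,
# percentage = total / overall budget. Figures copied from the table above.
sources = {
    # name: (unique tokens in billions, multiplier)
    "mC4 - Indonesian": (3.68, 4),
    "mC4 - Malay": (0.72, 4),
    "mC4 - Thai": (5.8, 2),
}

TOTAL_BUDGET_B = 980  # approximate overall token budget, in billions

for name, (unique_b, multiplier) in sources.items():
    total_b = unique_b * multiplier
    share = 100 * total_b / TOTAL_BUDGET_B
    print(f"{name}: {total_b:.1f}B total tokens, {share:.2f}% of the mix")
# e.g. mC4 - Indonesian: 14.7B total tokens, 1.50% of the mix
```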


### Additional SEA-LION-Pile (non-mC4) Data Sources

This section contains the links to the additional datasets that form the SEA-LION-Pile; a sketch for loading some of them is shown after the list.

- [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
- [the Stack (Python, Javascript, Shell, SQL, Markdown)](https://huggingface.co/datasets/bigcode/the-stack-dedup)
- [RedPajama (StackExchange, ArXiv)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
- WangChanBERTa
  - [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020)
  - [prachathai67k](https://huggingface.co/datasets/prachathai67k)
  - [thaisum](https://huggingface.co/datasets/thaisum)
  - [Opus - bible-uedin](https://opus.nlpl.eu/bible-uedin.php)
  - [Opus - Tanzil](https://opus.nlpl.eu/Tanzil.php)
  - [Opus - Opensubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)
  - [Opus - QED](https://opus.nlpl.eu/QED.php)
  - [Opus - Ted2020](https://opus.nlpl.eu/TED2020.php)
  - [Opus - Oscar](https://oscar-project.org/post/news-23-01)
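
Several of these sources are hosted on the Hugging Face Hub and can be streamed directly. The sketch below is illustrative only: the per-language `data_dir` layout for the Stack is an assumption based on that dataset's documentation, and some repositories are gated, so you may need to accept their terms and run `huggingface-cli login` first.

```python
# Sketch: stream two of the additional (non-mC4) sources listed above.
from datasets import load_dataset

# RefinedWeb (English web data).
refinedweb = load_dataset(
    "tiiuae/falcon-refinedweb", split="train", streaming=True
)

# the Stack (deduplicated), restricted to Python.
# The data_dir value assumes the per-language directory layout of that repo.
stack_python = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/python",
    split="train",
    streaming=True,
)

print(next(iter(refinedweb)).keys())
print(next(iter(stack_python)).keys())
```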


### Limitations

- As toxic or biased data is prevalent on the internet, it is likely that our dataset contains such content.
- Despite our best efforts to filter out content that does not qualify as natural language and to deduplicate documents, our pipeline may still let through documents that are erroneous or redundant; a minimal downstream check is sketched below.
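
For users who want to screen their own downstream samples for residual duplicates, the sketch below shows one very simple exact-match check: hashing a normalised copy of each document and dropping repeats. This is only an illustration and is not the pipeline used to build SEA-LION-Pile.

```python
# Illustrative exact-duplicate screen for downstream samples.
# NOT the SEA-LION-Pile cleaning pipeline, just a simple example check.
import hashlib

def normalise(text: str) -> str:
    """Lowercase and collapse whitespace so trivial variants hash identically."""
    return " ".join(text.lower().split())

def drop_exact_duplicates(documents):
    """Yield documents whose normalised text has not been seen before."""
    seen = set()
    for doc in documents:
        digest = hashlib.sha256(normalise(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            yield doc

docs = ["Hello   world", "hello world", "Another document"]
print(list(drop_exact_duplicates(docs)))  # the near-identical second doc is dropped
```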


### License

This public extract of mC4 is made available under the [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl ToU](https://commoncrawl.org/terms-of-use/).

For all other licenses, please refer to their individual pages above.

We endeavor to ensure that the data we use is permissible and have chosen datasets from creators who have processes to exclude copyrighted or disputed data. For other new data, we have obtained permission to use and distribute it.


## References

```bibtex
@misc{lowphansirikul2021wangchanberta,
    title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
    author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
    year={2021},
    eprint={2101.09635},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}

@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}

@article{Kocetkov2022TheStack,
  title={The Stack: 3 TB of permissively licensed source code},
  author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou, Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm},
  journal={Preprint},
  year={2022}
}

@software{together2023redpajama,
  author = {Together Computer},
  title = {RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
  month = {April},
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```