RaymondAISG committed 0ed0ca2 (1 parent: 2aba7af)

Update README.md

Files changed (1): README.md (+108 −0)

README.md CHANGED
@@ -2,4 +2,112 @@
  license: other
  license_name: other
  license_link: LICENSE
+ language:
+ - zh
+ - vi
+ - id
+ - ms
+ - tl
+ - my
+ - th
+ - lo
+ - km
+ - ta
  ---
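For quick reference, the ISO 639-1 codes declared in the YAML front matter above can be spelled out as follows; a minimal sketch, noting that `tl` covers Tagalog, the basis of Filipino:

```python
# ISO 639-1 codes from the dataset card's front matter, mapped to
# the language names they denote.
LANGUAGE_CODES = {
    "zh": "Chinese",
    "vi": "Vietnamese",
    "id": "Indonesian",
    "ms": "Malay",
    "tl": "Tagalog (Filipino)",
    "my": "Burmese",
    "th": "Thai",
    "lo": "Lao",
    "km": "Khmer",
    "ta": "Tamil",
}

print(", ".join(LANGUAGE_CODES.values()))
```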
+
+ # SEA-LION-Pile
+
+ SEA-LION is a collection of Large Language Models (LLMs) that have been pretrained and instruct-tuned for the Southeast Asia (SEA) region.
+ This repository contains the cleaned mC4 portion of the full SEA-LION-Pile corpus used to train the SEA-LION family of models.
+
+ ## Dataset Details
+
+ SEA-LION was trained on 980B tokens of the following data:
+
+ | Data Source               | Unique Tokens | Multiplier | Total Tokens | Percentage |
+ |---------------------------|:-------------:|:----------:|:------------:|:----------:|
+ | RefinedWeb - English      | 571.3B        | 1          | 571.3B       | 58.20%     |
+ | mC4 - Chinese             | 91.2B         | 1          | 91.2B        | 9.29%      |
+ | mC4 - Indonesian          | 3.68B         | 4          | 14.7B        | 1.50%      |
+ | mC4 - Malay               | 0.72B         | 4          | 2.9B         | 0.29%      |
+ | mC4 - Filipino            | 1.32B         | 4          | 5.3B         | 0.54%      |
+ | mC4 - Burmese             | 1.2B          | 4          | 4.9B         | 0.49%      |
+ | mC4 - Vietnamese          | 63.4B         | 1          | 63.4B        | 6.46%      |
+ | mC4 - Thai                | 5.8B          | 2          | 11.6B        | 1.18%      |
+ | WangChanBERTa - Thai      | 5B            | 2          | 10B          | 1.02%      |
+ | mC4 - Lao                 | 0.27B         | 4          | 1.1B         | 0.12%      |
+ | mC4 - Khmer               | 0.97B         | 4          | 3.9B         | 0.40%      |
+ | mC4 - Tamil               | 2.55B         | 4          | 10.2B        | 1.04%      |
+ | the Stack - Python        | 20.9B         | 2          | 41.8B        | 4.26%      |
+ | the Stack - Javascript    | 55.6B         | 1          | 55.6B        | 5.66%      |
+ | the Stack - Shell         | 1.25B         | 2          | 2.5B         | 0.26%      |
+ | the Stack - SQL           | 6.4B          | 2          | 12.8B        | 1.31%      |
+ | the Stack - Markdown      | 26.6B         | 1          | 26.6B        | 2.71%      |
+ | RedPajama - StackExchange | 21.2B         | 1          | 21.2B        | 2.16%      |
+ | RedPajama - ArXiv         | 30.6B         | 1          | 30.6B        | 3.12%      |
+
+
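As a sanity check, each Total Tokens value above is Unique Tokens times Multiplier, and the totals sum to roughly the stated 980B. A quick sketch, with the values transcribed from the table (the Shell row read as 1.25B unique tokens):

```python
# Data mixture transcribed from the SEA-LION-Pile table:
# (data source, unique tokens in billions, multiplier).
mixture = [
    ("RefinedWeb - English", 571.3, 1),
    ("mC4 - Chinese", 91.2, 1),
    ("mC4 - Indonesian", 3.68, 4),
    ("mC4 - Malay", 0.72, 4),
    ("mC4 - Filipino", 1.32, 4),
    ("mC4 - Burmese", 1.2, 4),
    ("mC4 - Vietnamese", 63.4, 1),
    ("mC4 - Thai", 5.8, 2),
    ("WangChanBERTa - Thai", 5.0, 2),
    ("mC4 - Lao", 0.27, 4),
    ("mC4 - Khmer", 0.97, 4),
    ("mC4 - Tamil", 2.55, 4),
    ("the Stack - Python", 20.9, 2),
    ("the Stack - Javascript", 55.6, 1),
    ("the Stack - Shell", 1.25, 2),
    ("the Stack - SQL", 6.4, 2),
    ("the Stack - Markdown", 26.6, 1),
    ("RedPajama - StackExchange", 21.2, 1),
    ("RedPajama - ArXiv", 30.6, 1),
]

# Total tokens per source = unique tokens * multiplier.
total = sum(unique * mult for _, unique, mult in mixture)
print(f"Total: {total:.1f}B tokens")  # close to the stated 980B

for name, unique, mult in mixture:
    tokens = unique * mult
    print(f"{name}: {tokens:.1f}B ({100 * tokens / total:.2f}%)")
```

Small discrepancies against the table (e.g. 981B vs. 980B) come from rounding in the published figures.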
+ ### Non-mC4 data source locations
+
+ - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
+ - [the Stack (Python, Javascript, Shell, SQL, Markdown)](https://huggingface.co/datasets/bigcode/the-stack-dedup)
+ - [RedPajama (StackExchange, ArXiv)](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T)
+ - WangChanBERTa
+   - [scb_mt_enth_2020](https://huggingface.co/datasets/scb_mt_enth_2020)
+   - [prachathai67k](https://huggingface.co/datasets/prachathai67k)
+   - [thaisum](https://huggingface.co/datasets/thaisum)
+   - [Opus - bible-uedin](https://opus.nlpl.eu/bible-uedin.php)
+   - [Opus - Tanzil](https://opus.nlpl.eu/Tanzil.php)
+   - [Opus - OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php)
+   - [Opus - QED](https://opus.nlpl.eu/QED.php)
+   - [Opus - TED2020](https://opus.nlpl.eu/TED2020.php)
+   - [Opus - OSCAR](https://oscar-project.org/post/news-23-01)
+
+ ### Limitations
+
+ - As toxic or biased content is prevalent on the internet, our dataset likely contains some of it.
+ - Despite our best efforts to filter out content that does not qualify as natural language and to deduplicate documents, our pipeline may still let through documents that are erroneous or redundant.
+
+ ### License
+
+ This public extract of mC4 is made available under the [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl ToU](https://commoncrawl.org/terms-of-use/).
+
+ For all other licenses, please refer to their individual pages above.
+
+ ## Citations
+
+ ```bibtex
+ @misc{lowphansirikul2021wangchanberta,
+   title={WangchanBERTa: Pretraining transformer-based Thai Language Models},
+   author={Lalita Lowphansirikul and Charin Polpanumas and Nawat Jantrakulchai and Sarana Nutanong},
+   year={2021},
+   eprint={2101.09635},
+   archivePrefix={arXiv},
+   primaryClass={cs.CL}
+ }
+
+ @article{refinedweb,
+   title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
+   author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
+   journal={arXiv preprint arXiv:2306.01116},
+   eprint={2306.01116},
+   eprinttype={arXiv},
+   url={https://arxiv.org/abs/2306.01116},
+   year={2023}
+ }
+
+ @article{Kocetkov2022TheStack,
+   title={The Stack: 3 TB of permissively licensed source code},
+   author={Kocetkov, Denis and Li, Raymond and Ben Allal, Loubna and Li, Jia and Mou, Chenghao and Muñoz Ferrandis, Carlos and Jernite, Yacine and Mitchell, Margaret and Hughes, Sean and Wolf, Thomas and Bahdanau, Dzmitry and von Werra, Leandro and de Vries, Harm},
+   journal={Preprint},
+   year={2022}
+ }
+
+ @software{together2023redpajama,
+   author={Together Computer},
+   title={RedPajama: An Open Source Recipe to Reproduce LLaMA training dataset},
+   month=apr,
+   year=2023,
+   url={https://github.com/togethercomputer/RedPajama-Data}
+ }
+ ```