Paper: arXiv:2303.03915

The BigScience ROOTS Corpus: A 1.6TB Composite Multilingual Dataset

Leandro Von Werra, Chenghao Mou, Huu Nguyen, Jörg Frohberg, Gerard Dupont, Francesco De Toni, Olivier Nguyen, Somaieh Nikpoor, Maraim Masoud, Pierre Colombo, Paulo Villegas, et al.
Published on Mar 7, 2023


As language models grow ever larger, the need for large-scale high-quality text datasets has never been more pressing, especially in multilingual settings. The BigScience workshop, a 1-year international and multidisciplinary initiative, was formed with the goal of researching and training large language models as a values-driven undertaking, putting issues of ethics, harm, and governance in the foreground. This paper documents the data creation and curation efforts undertaken by BigScience to assemble the Responsible Open-science Open-collaboration Text Sources (ROOTS) corpus, a 1.6TB dataset spanning 59 languages that was used to train the 176-billion-parameter BigScience Large Open-science Open-access Multilingual (BLOOM) language model. We further release a large initial subset of the corpus and analyses thereof, and hope to empower large-scale monolingual and multilingual modeling projects with both the data and the processing tools, as well as stimulate research around this large multilingual corpus.
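The released subset of the corpus is distributed through the Hugging Face Hub. As a minimal sketch of how one might inspect a single component, assuming the data remains available under the bigscience-data organization and that the repository name roots_en_wikipedia is representative (individual subsets may be gated and require accepting the data terms and logging in first):

    # Minimal sketch: stream one ROOTS component from the Hugging Face Hub.
    # Assumptions: the subset lives under the "bigscience-data" organization and
    # "roots_en_wikipedia" is a representative repository name; gated subsets may
    # require `huggingface-cli login` and accepting the dataset's terms of use.
    from datasets import load_dataset

    ds = load_dataset(
        "bigscience-data/roots_en_wikipedia",  # representative subset name (assumption)
        split="train",
        streaming=True,  # iterate without downloading the whole split
    )

    # Print the first few records; ROOTS records are assumed to carry the raw
    # document text in a "text" field.
    for i, example in enumerate(ds):
        print(example["text"][:200])
        if i >= 2:
            break

Streaming mode is used here because the full corpus is 1.6TB; it lets a reader sample documents from a subset without committing disk space to the entire component.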

