---
license: gpl-3.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: src
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 22252479269
    num_examples: 121165423
  download_size: 16613737659
  dataset_size: 22252479269
language:
- en
- ko
- fr
- aa
- hi
size_categories:
- 100M<n<1B
---
This dataset is built from the open-source data accompanying "An Open Dataset and Model for Language Identification" (Burchell et al., 2023).
The repository containing the original data can be found here: https://github.com/laurieburchell/open-lid-dataset.
The license for this recreation follows the upstream dataset (GPLv3+).
However, the individual datasets aggregated within it each retain their own licenses.
Conversion to a Hugging Face dataset and upload to the Hub were done by Chris Ha.
The original authors built the dataset to train LID models covering 201 languages. Such a dataset could also serve as training data for a tokenizer covering those 201 languages.
This dataset was processed and uploaded using the Hugging Face `datasets` library.
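As a minimal sketch of working with rows in this dataset's schema (`src`, `lang`, `text`), the snippet below groups text by language, which is one way to build per-language corpora for tokenizer training as suggested above. The example rows and language codes are illustrative placeholders, not values taken from the actual data, and the snippet uses only the standard library rather than the `datasets` API.

```python
from collections import defaultdict

# Illustrative rows matching the dataset's schema: src, lang, text.
# The source names and language codes here are made up for the example.
rows = [
    {"src": "example-corpus", "lang": "eng_Latn", "text": "Hello world."},
    {"src": "example-corpus", "lang": "kor_Hang", "text": "안녕하세요."},
    {"src": "example-corpus", "lang": "eng_Latn", "text": "Another line."},
]

# Group text by language, e.g. to assemble per-language corpora
# before feeding them to a tokenizer trainer.
corpora = defaultdict(list)
for row in rows:
    corpora[row["lang"]].append(row["text"])

print(sorted(corpora))           # ['eng_Latn', 'kor_Hang']
print(len(corpora["eng_Latn"]))  # 2
```

In practice one would stream the real dataset and feed each language's texts to a tokenizer trainer instead of collecting them in memory.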