---
license: gpl-3.0
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
dataset_info:
  features:
  - name: src
    dtype: string
  - name: lang
    dtype: string
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 22252479269
    num_examples: 121165423
  download_size: 16613737659
  dataset_size: 22252479269
language:
- en
- ko
- fr
- aa
- hi
size_categories:
- 100M<n<1B
---
This dataset is built from the open-source data accompanying ["An Open Dataset and Model for Language Identification" (Burchell et al., 2023)](https://arxiv.org/abs/2305.13820).
The repository containing the underlying data can be found at https://github.com/laurieburchell/open-lid-dataset.
This recreation itself is licensed under GPLv3+, following the original upstream dataset.
However, the individual datasets within it are each covered by [their own licenses.](https://github.com/laurieburchell/open-lid-dataset/blob/main/licenses.md)
The "src" column lists the sources. "lang" column lists the language code in alpha-3/ISO 639-2 format followed by the script. "text" column contains the sentence.
Conversion to a Hugging Face dataset and upload to the Hub were done by [Chris Ha](https://github.com/chris-ha458).
The original authors built the dataset to train LID (language identification) models covering 201 languages. I thought such a dataset could also be used to train a tokenizer for those 201 languages.
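
As a rough sketch of that idea, the snippet below trains a byte-level BPE tokenizer on the `text` column with the `tokenizers` library; the vocabulary size, special tokens, and repo id are illustrative assumptions, not part of any original pipeline:

```python
from datasets import load_dataset
from tokenizers import Tokenizer, models, pre_tokenizers, trainers

# Assumed settings: byte-level BPE with a 64k vocabulary.
tokenizer = Tokenizer(models.BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel()
trainer = trainers.BpeTrainer(vocab_size=64000, special_tokens=["[UNK]", "[PAD]"])

# Placeholder repo id; stream so the corpus is never fully materialised in memory.
ds = load_dataset("user/open-lid-dataset", split="train", streaming=True)

def batched_texts(batch_size=1000):
    batch = []
    for example in ds:
        batch.append(example["text"])
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

tokenizer.train_from_iterator(batched_texts(), trainer=trainer)
tokenizer.save("open-lid-bpe.json")
```
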
This dataset was processed and uploaded using the Hugging Face `datasets` library.
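
For reference, a hedged sketch of what that conversion step could look like; the upstream file name, column order, and target repo id below are assumptions, so check the upstream repository for the actual layout:

```python
from datasets import load_dataset, Features, Value

# Assumed layout: a gzipped tab-separated file holding sentence, language code, and source.
features = Features({"text": Value("string"), "lang": Value("string"), "src": Value("string")})
ds = load_dataset(
    "csv",
    data_files="lid201-data.tsv.gz",       # illustrative file name
    delimiter="\t",
    column_names=["text", "lang", "src"],  # assumed column order
    features=features,
    split="train",
)
ds.push_to_hub("user/open-lid-dataset")    # placeholder repo id
```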