---
license: gpl-3.0
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: src
      dtype: string
    - name: lang
      dtype: string
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 22252479269
      num_examples: 121165423
  download_size: 16613737659
  dataset_size: 22252479269
language:
  - en
  - ko
  - fr
  - aa
  - hi
size_categories:
  - 100M<n<1B
---

This dataset is built from the open-source data accompanying "An Open Dataset and Model for Language Identification" (Burchell et al., 2023).

The repository containing the original data can be found here: https://github.com/laurieburchell/open-lid-dataset.

The license for this recreation follows that of the original upstream dataset: GPLv3+.

However, the individual datasets within it each carry their own licenses. The "src" column lists the source. The "lang" column gives the language code in alpha-3/ISO 639-2 format, followed by the script. The "text" column contains the sentence.
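To illustrate the schema above, here is a minimal sketch of a single record and of splitting the "lang" field into its language code and script. The sample record values and the `split_lang` helper are hypothetical, not part of the dataset; it assumes the underscore-separated form such as `eng_Latn`.

```python
# A record has three string fields: "src" (source corpus),
# "lang" (alpha-3 code plus script), and "text" (the sentence).
record = {
    "src": "example-source",  # hypothetical source name
    "lang": "eng_Latn",       # ISO 639-2/alpha-3 code + script
    "text": "Hello, world.",
}

def split_lang(lang: str) -> tuple[str, str]:
    """Split a 'lang' value like 'eng_Latn' into (language code, script)."""
    code, script = lang.split("_", 1)
    return code, script

print(split_lang(record["lang"]))  # ('eng', 'Latn')
```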

Conversion to a Hugging Face dataset and upload to the Hub were done by Chris Ha.

The original authors built the dataset to train LID models for 201 languages. I thought such a dataset could also be used to train a tokenizer covering 201 languages.
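As a rough illustration of the tokenizer idea, the sketch below shows the first step of byte-pair-encoding (BPE) training: counting adjacent symbol pairs over sentences. The sample sentences are hypothetical stand-ins; a real multilingual tokenizer would be trained on this dataset's "text" column, typically with a dedicated library rather than this toy code.

```python
from collections import Counter

# Hypothetical sample sentences standing in for the dataset's "text" column.
samples = [
    "Hello, world.",      # English
    "Bonjour le monde.",  # French
]

def pair_counts(texts):
    """Count adjacent character pairs across a list of sentences
    (the statistic a BPE trainer uses to choose its next merge)."""
    counts = Counter()
    for text in texts:
        for a, b in zip(text, text[1:]):
            counts[(a, b)] += 1
    return counts

# The most frequent pair would become the first BPE merge.
top_pair, top_count = pair_counts(samples).most_common(1)[0]
```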

This dataset was processed and uploaded using the Hugging Face `datasets` library.