---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
  - config_name: en
    data_files: en.json
    default: true
  - config_name: en-xl
    data_files: en-xl.json
  - config_name: ca
    data_files: ca.json
  - config_name: de
    data_files: de.json
  - config_name: es
    data_files: es.json
  - config_name: el
    data_files: el.json
  - config_name: fa
    data_files: fa.json
  - config_name: fi
    data_files: fi.json
  - config_name: fr
    data_files: fr.json
  - config_name: it
    data_files: it.json
  - config_name: pl
    data_files: pl.json
  - config_name: pt
    data_files: pt.json
  - config_name: ru
    data_files: ru.json
  - config_name: sv
    data_files: sv.json
  - config_name: uk
    data_files: uk.json
  - config_name: zh
    data_files: zh.json
language:
  - en
  - ca
  - de
  - es
  - el
  - fa
  - fi
  - fr
  - it
  - pl
  - pt
  - ru
  - sv
  - uk
  - zh
tags:
  - synthetic
---

# Multilingual Phonemes 10K Alpha

This dataset contains approximately 10,000 text–phoneme pairs from each supported language. With 15 supported languages, that comes to roughly 150K pairs in total, not counting the English-XL config, which adds another ~100K unique rows.
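
Each language is exposed as its own config. As a quick-start sketch (the repo ID below is an assumption for illustration; substitute this dataset's actual path on the Hugging Face Hub), the configs can be loaded with the `datasets` library:

```python
from datasets import load_dataset

# NOTE: the repo ID is an assumption for illustration -- replace it with
# this dataset's actual path on the Hugging Face Hub.
REPO_ID = "mrfakename/multilingual-phonemes-10k-alpha"

# `en` is the default config; pass another config name for other languages.
en = load_dataset(REPO_ID, "en")
zh = load_dataset(REPO_ID, "zh")

print(en["train"][0])  # one text/phoneme pair
```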

## Languages

The 15 supported languages are listed below, along with the English-XL config (`en-xl`), whose ~100K phonemized pairs are unique (not included in any other split).

- English (en)
- English-XL (en-xl): ~100K phonemized pairs, English-only
- Catalan (ca)
- German (de)
- Spanish (es)
- Greek (el)
- Persian (fa): requested by @Respair
- Finnish (fi)
- French (fr)
- Italian (it)
- Polish (pl)
- Portuguese (pt)
- Russian (ru)
- Swedish (sv)
- Ukrainian (uk)
- Chinese (zh): thank you to @eugenepentland for assistance in processing this text, as East-Asian languages are the most compute-intensive!

## License + Credits

Source data comes from Wikipedia and is licensed under CC-BY-SA 3.0. This dataset is licensed under CC-BY-SA 3.0.

## Processing

We used the following process to preprocess the dataset (a rough sketch of steps 3–6 follows the list):

  1. Download data from Wikipedia by language, selecting only the first Parquet file and naming it with the language code
  2. Process it with the Data Preprocessing Scripts (available to StyleTTS 2 Community members only), modifying the code to work with each language
  3. Script: Clean the text
  4. Script: Remove ultra-short phrases
  5. Script: Phonemize
  6. Script: Save JSON
  7. Upload dataset
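
The Data Preprocessing Scripts themselves are private to the StyleTTS 2 Community, so here is only a minimal, hypothetical sketch of steps 3–6 using the open-source `phonemizer` package with the espeak backend; the cleaning rules, length threshold, and output field names are all assumptions:

```python
import json
import re

from phonemizer import phonemize  # pip install phonemizer (requires espeak-ng)

MIN_WORDS = 3  # assumed cutoff for "ultra-short" phrases

def clean(text: str) -> str:
    """Assumed cleaning rules: drop footnote markers, collapse whitespace."""
    text = re.sub(r"\[\d+\]", "", text)  # e.g. citation markers like [1]
    return re.sub(r"\s+", " ", text).strip()

raw_texts = ["An example sentence pulled from Wikipedia.", "Too short."]

# Steps 3-4: clean, then remove ultra-short phrases.
texts = [clean(t) for t in raw_texts]
texts = [t for t in texts if len(t.split()) >= MIN_WORDS]

# Step 5: phonemize in one batched call (language code per split, e.g. "fr-fr").
phonemes = phonemize(texts, language="en-us", backend="espeak", strip=True)

# Step 6: save JSON, one object per text/phoneme pair (field names assumed).
with open("en.json", "w", encoding="utf-8") as f:
    json.dump(
        [{"text": t, "phonemes": p} for t, p in zip(texts, phonemes)],
        f,
        ensure_ascii=False,
    )
```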

## Note

East-Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese; the zh split consists mainly of Simplified Chinese. We recommend converting characters to Simplified Chinese during inference, using a library such as hanziconv or chinese-converter (see the sketch below).
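
For example, a minimal conversion step with `hanziconv` (the input string is purely illustrative):

```python
from hanziconv import HanziConv  # pip install hanziconv

text = "漢語的書寫系統"  # Traditional Chinese input (illustrative)
print(HanziConv.toSimplified(text))  # -> 汉语的书写系统
```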