---
license: cc-by-sa-3.0
license_name: cc-by-sa
configs:
- config_name: en
  data_files: en.json
  default: true
- config_name: en-xl
  data_files: en-xl.json
- config_name: ca
  data_files: ca.json
- config_name: de
  data_files: de.json
- config_name: es
  data_files: es.json
- config_name: el
  data_files: el.json
- config_name: fa
  data_files: fa.json
- config_name: fi
  data_files: fi.json
- config_name: fr
  data_files: fr.json
- config_name: it
  data_files: it.json
- config_name: pl
  data_files: pl.json
- config_name: pt
  data_files: pt.json
- config_name: ru
  data_files: ru.json
- config_name: sv
  data_files: sv.json
- config_name: uk
  data_files: uk.json
- config_name: zh
  data_files: zh.json
language:
- en
- ca
- de
- es
- el
- fa
- fi
- fr
- it
- pl
- pt
- ru
- sv
- uk
- zh
tags:
- synthetic
---

# Multilingual Phonemes 10K Alpha


This dataset contains approximately 10,000 text-phoneme pairs for each supported language. With 15 languages, that comes to roughly 150K pairs in total. This does not include the English-XL split, which adds another ~100K unique rows.
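A specific language can be loaded by passing its config name to the `datasets` library. The repository id below is a placeholder; substitute this dataset's actual path on the Hub:

```python
from datasets import load_dataset  # pip install datasets

# Load the French config; each config is backed by a single JSON file (e.g. fr.json).
# "your-namespace/multilingual-phonemes-10k-alpha" is a placeholder repo id.
ds = load_dataset("your-namespace/multilingual-phonemes-10k-alpha", "fr")

print(ds)
```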

## Languages

Each of the 15 languages below contributes roughly 10,000 text-phoneme pairs. English-XL is an additional English-only split with 100K phonemized pairs that are unique to it (not included in any other split).

* English (en)
* English-XL (en-xl): ~100K phonemized pairs, English-only
* Catalan (ca)
* German (de)
* Spanish (es)
* Greek (el)
* Persian (fa): Requested by [@Respair](https://huggingface.co/Respair)
* Finnish (fi)
* French (fr)
* Italian (it)
* Polish (pl)
* Portuguese (pt)
* Russian (ru)
* Swedish (sv)
* Ukrainian (uk)
* Chinese (zh): Thank you to [@eugenepentland](https://huggingface.co/eugenepentland) for assistance in processing this text, as East-Asian languages are the most compute-intensive!

## License + Credits

Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is licensed under CC-BY-SA 3.0.

## Processing

We used the following process to prepare the dataset (a rough approximation of steps 3-5 is sketched after the list):

1. Download data from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) by language, selecting only the first Parquet file and naming it with the language code
2. Process using [Data Preprocessing Scripts (StyleTTS 2 Community members only)](https://huggingface.co/styletts2-community/data-preprocessing-scripts) and modify the code to work with the language
3. Script: Clean the text
4. Script: Remove ultra-short phrases
5. Script: Phonemize
6. Script: Save JSON
7. Upload dataset
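
The preprocessing scripts themselves are restricted to StyleTTS 2 Community members, but steps 3-5 can be approximated with the open-source `phonemizer` package. The cleaning rule, minimum-length threshold, and JSON field names below are illustrative assumptions, not the exact values used:

```python
import json
import re

from phonemizer import phonemize  # pip install phonemizer (requires espeak-ng)


def clean_text(text: str) -> str:
    # Illustrative cleanup: collapse whitespace; the real scripts may do more.
    return re.sub(r"\s+", " ", text).strip()


def build_pairs(lines, language="en-us", min_chars=20):
    pairs = []
    for line in lines:
        text = clean_text(line)
        if len(text) < min_chars:  # drop ultra-short phrases (threshold is an assumption)
            continue
        phonemes = phonemize(text, language=language, backend="espeak", strip=True)
        pairs.append({"text": text, "phonemes": phonemes})
    return pairs


if __name__ == "__main__":
    pairs = build_pairs(["This is a short example sentence for phonemization."])
    with open("en.json", "w", encoding="utf-8") as f:
        json.dump(pairs, f, ensure_ascii=False)
```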

## Note

East-Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese; the `zh` split consists mainly of Simplified Chinese. We recommend converting input text to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`.
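
A minimal sketch of the recommended conversion with `hanziconv` (the input string is just an example):

```python
from hanziconv import HanziConv  # pip install hanziconv

# Convert Traditional characters to Simplified before feeding text to a model
# trained on the zh split.
traditional = "漢字"
simplified = HanziConv.toSimplified(traditional)  # -> "汉字"
print(simplified)
```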