mrfakename committed on
Commit 8ab90c9
1 Parent(s): e596d3a

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -59,7 +59,7 @@ language:
 
 This dataset contains approximately 10,000 pairs of text and phonemes from each supported language. We support 15 languages in this dataset, so we have a total of ~150K pairs. This does not include the English-XL dataset, which includes another 100K unique rows.
 
-Only for training StyleTTS 2-related **open source** models.
+This dataset is for training **open source** models only.
 
 ## Languages
 
@@ -84,6 +84,8 @@ We support 15 languages, which means we have around 150,000 pairs of text and ph
 
 ## License + Credits
 
+This dataset is for training **open source** models only.
+
 Source data comes from [Wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) and is licensed under CC-BY-SA 3.0. This dataset is licensed under CC-BY-SA 3.0.
 
 ## Processing
@@ -100,4 +102,4 @@ We utilized the following process to preprocess the dataset:
 
 ## Note
 
-East-asian languages are experimental and in beta. We do not distinguish between chinese traditional and simplified, the dataset consists mainly of simplified chinese. We recommend converting characters to simplified chinese during inference using a library such as `hanziconv` or `chinese-converter`.
+East-Asian languages are experimental. We do not distinguish between Traditional and Simplified Chinese. The dataset consists mainly of Simplified Chinese in the `zh` split. We recommend converting characters to Simplified Chinese during inference, using a library such as `hanziconv` or `chinese-converter`.
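
For reference, the conversion step the updated note recommends is a one-liner with `hanziconv`; a minimal sketch, assuming its `HanziConv.toSimplified` API:

```python
from hanziconv import HanziConv

# Normalize any Traditional characters to Simplified before phonemizing,
# since the `zh` split is predominantly Simplified Chinese.
text = "漢字轉換"                        # Traditional input
print(HanziConv.toSimplified(text))     # -> 汉字转换
```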