suflaj committed
Commit 87cdf02
1 parent: ff7114c

Update README.md to reflect code changes

Files changed (1):
- README.md: +8 -12
README.md CHANGED
@@ -49,13 +49,13 @@ The dataset features 8 languages originally seen in FLEURS:
 - Polish
 - Swedish
 
- The original FLEURS samples are used as `human` samples, while `synthetic` samples are generated using:
+ The `synthetic` samples are generated using:
 
 - [Google Cloud Text-To-Speech](https://cloud.google.com/text-to-speech)
 - [Azure Text-To-Speech](https://azure.microsoft.com/en-us/products/ai-services/text-to-speech)
 - [Amazon Polly](https://aws.amazon.com/polly/)
 
- The resulting dataset features roughly twice the samples per language (every `human` sample usually has its `synthetic` counterpart).
+ Only the test VITS samples are provided. For every VITS voice (in practice, a specific set of model weights), there is one sample per transcript.
 
 
 - **Curated by:** [KONTXT by RealNetworks](https://realnetworks.com/kontxt)
@@ -74,7 +74,7 @@ The original FLEURS dataset was downloaded from [HuggingFace](https://huggingfac
 
 ## Uses
 
- This dataset is best used as a difficult test set. Each sample contains an `Audio` feature, and a label: `human` or `synthetic`.
+ This dataset is best used as a difficult test set. Each sample contains an `Audio` feature and a label, which is always `synthetic`; this dataset does not include any human samples.
 
 ### Direct Use
 
@@ -102,8 +102,6 @@ To load a different language, change `en_us` into one of the following:
 
 This dataset only has a `test` split.
 
- To load only the synthetic samples, append `_without-human` to the name. For example, `en_us` will load the test set also containing the original English FLEURS samples, while `en_us_without-human` will only load the synthetic VITS samples. This is useful if you simply want to include the VITS samples into the original FLEURS-HS test set without duplicating human samples.
-
 The `trust_remote_code=True` parameter is necessary because this dataset uses a custom loader. To see which code is being run, check out the [loading script](./fleurs-hs-vits.py).
 
 ## Dataset Structure
@@ -112,14 +110,12 @@ The dataset data is contained in the [data directory](https://huggingface.co/dat
 
 There exists 1 directory per language.
 
- Within those directories, there is a directory named `splits`; it contains 1 file per split:
+ Within that directory, there is a directory named `splits`; it contains 1 file per split:
 - `test.tar.gz`
 
- Those `.tar.gz` files contain 2 or more directories:
- - `human`
- - 1 or more directories named after the VITS model being used, ex. `thorsten-vits`
+ That `.tar.gz` file contains 1 or more directories, each named after the VITS model being used, ex. `thorsten-vits`.
 
- Each of these directories contain `.wav` files. Keep in mind that these directories can't be merged as they share most of their file names. An identical file name implies a speaker-voice pair, ex. `human/123.wav` and `thorsten-vits/123.wav`.
+ Each of these directories contains `.wav` files. Each `.wav` file is named after the ID of its transcript. Keep in mind that these directories can't be merged, as they share their file names. An identical file name implies a speaker-voice pair, ex. `human/123.wav` and `thorsten-vits/123.wav`.
 
 Finally, back to the language directory, it contains 3 metadata files, which are not used in the loaded dataset, but might be useful to researchers:
 - `recording-metadata.csv`
@@ -136,8 +132,8 @@ A sample contains an Audio feature `audio`, and a string `label`.
 ```
 {
   'audio': {
-    'path': 'ljspeech-vits/1003119935936341070.wav',
-    'array': array([-0.00048828, -0.00106812, -0.00164795, ..., 0., 0., 0.]),
+    'path': 'ljspeech-vits/1660.wav',
+    'array': array([0.00119019, 0.00109863, 0.00106812, ..., 0., 0., 0.]),
     'sampling_rate': 16000
   },
   'label': 'synthetic'
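
Taken together, the loading instructions in this README imply usage along the following lines. This is a minimal sketch rather than text from the repository: the dataset ID `realnetworks-kontxt/fleurs-hs-vits` is an assumption (the exact ID is not shown in this diff), while the `en_us` config, the `test` split, `trust_remote_code=True`, and the `audio`/`label` fields follow the descriptions above.

```python
# Minimal sketch: load one language of the dataset and inspect a sample.
# NOTE: "realnetworks-kontxt/fleurs-hs-vits" is an assumed repository ID.
from datasets import load_dataset

dataset = load_dataset(
    "realnetworks-kontxt/fleurs-hs-vits",  # assumed dataset ID
    "en_us",                               # language config; see the language list above
    split="test",                          # the only split this dataset provides
    trust_remote_code=True,                # required because of the custom loading script
)

sample = dataset[0]
print(sample["label"])                   # always 'synthetic' in this dataset
print(sample["audio"]["path"])           # ex. 'ljspeech-vits/1660.wav'
print(sample["audio"]["sampling_rate"])  # 16000
print(sample["audio"]["array"][:5])      # raw waveform as a NumPy array
```

Iterating over `dataset` yields one such dictionary per synthetic sample.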
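
The directory layout described in the Dataset Structure section can also be inspected directly on disk. The sketch below assumes a language's `splits/test.tar.gz` has already been extracted to a local path (the path is hypothetical); it groups `.wav` files by transcript ID across the per-voice directories, reflecting the rule that identical file names refer to the same transcript.

```python
# Minimal sketch: group extracted .wav files by transcript ID across voice directories.
# The extraction path is hypothetical; adjust it to wherever splits/test.tar.gz was unpacked.
from collections import defaultdict
from pathlib import Path

extracted_root = Path("fleurs-hs-vits/en_us/test")  # hypothetical local path

files_by_transcript = defaultdict(dict)
for voice_dir in sorted(p for p in extracted_root.iterdir() if p.is_dir()):
    # Each directory is named after a VITS voice, ex. "thorsten-vits" or "ljspeech-vits".
    for wav_path in voice_dir.glob("*.wav"):
        # The file stem is the transcript ID; identical stems across voice
        # directories refer to the same transcript.
        files_by_transcript[wav_path.stem][voice_dir.name] = wav_path

for transcript_id, voices in sorted(files_by_transcript.items())[:3]:
    print(transcript_id, sorted(voices))
```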