Enrich card with number of hours and TTS use #2
opened by ylacombe (HF staff)

README.md CHANGED
@@ -16,6 +16,8 @@ source_datasets:
 - original
 task_categories:
 - automatic-speech-recognition
+- text-to-speech
+- text-to-audio
 task_ids: []
 paperswithcode_id: vctk
 train-eval-index:
@@ -101,11 +103,12 @@ dataset_info:
 
 ### Dataset Summary
 
-This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.
+This CSTR VCTK Corpus includes around 44 hours of speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent archive.
 
-### Supported Tasks
+### Supported Tasks
 
-
+- `automatic-speech-recognition`, `speaker-identification`: The dataset can be used to train a model for Automatic Speech Recognition (ASR). The model is presented with an audio file and asked to transcribe the audio file to written text. The most common evaluation metric is the word error rate (WER).
+- `text-to-speech`, `text-to-audio`: The dataset can also be used to train a model for Text-To-Speech (TTS).
 
 ### Languages
 
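For readers of the updated card, a minimal sketch of the two uses named under Supported Tasks is shown below. It assumes the corpus is reachable on the Hub under the `vctk` dataset id with a `train` split and `audio`/`text`/`speaker_id` columns (adjust if your copy differs), and the WER call only illustrates the metric mentioned in the card, not a trained model.

```python
# Sketch only: illustrates the ASR and TTS uses described in "Supported Tasks".
# Assumes the `vctk` Hub id, a `train` split, and `audio`/`text`/`speaker_id`
# columns; adjust to your local copy if these differ.
from datasets import load_dataset
import evaluate

vctk = load_dataset("vctk", split="train")

# Each row pairs a recording with its transcript and speaker id, which is the
# supervision needed both for ASR and for (multi-speaker) TTS training.
sample = vctk[0]
print(sample["text"])                    # sentence read by the speaker
print(sample["speaker_id"])              # speaker label, useful for TTS
print(sample["audio"]["sampling_rate"])  # decoded waveform metadata

# For ASR, the card cites word error rate (WER) as the usual metric;
# the toy strings below just show the metric's signature.
wer = evaluate.load("wer")
print(wer.compute(predictions=["the rainbow passage"],
                  references=["the rainbow passage text"]))
```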